Multipart upload parts are still expected to all be the same size, but this is now enforced when you complete an upload instead of every time you upload a part.
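The equal-part-size rule can be sketched as a small planning helper. `planParts` is a hypothetical client-side function, not part of the R2 API; it splits a total size into equal parts, with only the final part allowed to be smaller.

```javascript
// Sketch: R2 expects every part of a multipart upload to be the same size
// (the final part may be smaller). A mismatch is now reported when you
// complete the upload rather than on each individual part upload.
// `planParts` is an illustrative helper, not an R2 API.
function planParts(totalSize, partSize) {
  const sizes = [];
  for (let offset = 0; offset < totalSize; offset += partSize) {
    // Every part is `partSize` bytes except possibly the last one.
    sizes.push(Math.min(partSize, totalSize - offset));
  }
  return sizes;
}
```

Uploading parts from a plan like this keeps the completion step from failing the size check.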
Fixed a performance issue where concurrent multipart part uploads would get rejected.
CORS preflight responses, and CORS headers on other responses, are now implemented for S3 and public buckets. Currently, the only way to configure CORS is via the S3 API.
Fixed bindings list truncation to work correctly when listing keys whose custom metadata contains `"` or when some keys/values contain certain multi-byte UTF-8 values.
The S3 GetObject operation now only returns Content-Range in response to a ranged request.
The R2 put() binding options can now be given an onlyIf field, similar to get(), that performs a conditional upload.
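The conditional-upload semantics can be illustrated with a minimal in-memory stub. This is not the real R2 binding (which also supports date-based conditions and runs inside a Worker); it only sketches how an `onlyIf` with `etagMatches`/`etagDoesNotMatch` makes `put()` succeed or return `null`.

```javascript
// Minimal in-memory sketch of put()'s onlyIf semantics. Illustrative stub
// only -- not the real R2 binding.
class StubBucket {
  #objects = new Map();
  #nextEtag = 1;

  put(key, value, options = {}) {
    const existing = this.#objects.get(key);
    const cond = options.onlyIf;
    // Condition fails: return null instead of writing.
    if (cond?.etagMatches && existing?.etag !== cond.etagMatches) return null;
    if (cond?.etagDoesNotMatch && existing?.etag === cond.etagDoesNotMatch) return null;
    const obj = { key, value, etag: String(this.#nextEtag++) };
    this.#objects.set(key, obj);
    return obj;
  }

  get(key) { return this.#objects.get(key) ?? null; }
}
```

A caller can read an object's etag, then pass it in `onlyIf` to avoid overwriting a version it has not seen.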
The R2 delete() binding now supports deleting multiple keys at once.
The R2 put() binding now supports user-specified SHA-1, SHA-256, SHA-384, SHA-512 checksums in options.
User-specified object checksums will now be available in the R2 get() and head() bindings response. MD5 is included by default for non-multipart uploaded objects.
The S3 DeleteObjects operation no longer trims whitespace from around keys before deleting. Previously, objects with leading or trailing spaces in their keys could not be deleted, and if an object with the trimmed key existed, it would be deleted instead. The S3 DeleteObject operation was not affected by this.
Fixed presigned URL support for the S3 ListBuckets and ListObjects operations.
Uploads will automatically infer the Content-Type based on file body if one is not explicitly set in the PutObject request. This functionality will come to multipart operations in the future.
Fixed an S3 compatibility issue for error responses with MinIO .NET SDK and any other tooling that expects no xmlns namespace attribute on the top-level Error tag.
List continuation tokens prior to 2022-07-01 are no longer accepted and must be obtained again through a new list operation.
The list() binding will now correctly return a smaller limit if too much data would otherwise be returned (previously would return an Internal Error).
Improvements to 500s: we now convert errors, so operations that previously surfaced concurrency problems as InternalError now return TooMuchConcurrency instead. We’ve also reduced the rate of 500s through internal improvements.
ListMultipartUpload correctly encodes the returned Key if the encoding-type is specified.
S3 XML documents sent to R2 that have an XML declaration are no longer rejected with 400 Bad Request / MalformedXML.
Minor S3 XML compatibility fix impacting Arq Backup on Windows only (not the Mac version). The response now contains an XML declaration tag prefix, and the xmlns attribute is present on all top-level tags in the response.
Support for the r2_list_honor_include compat flag arriving in an upcoming runtime release (it becomes the default behavior as of the 2022-07-14 compat date). Without that compat flag/date, list will continue to function implicitly as include: ['httpMetadata', 'customMetadata'] regardless of what you specify.
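The opt-in can be sketched as a wrangler.toml fragment. The field names are standard Wrangler configuration; the flag name and date are taken from this entry.

```toml
# wrangler.toml -- opt in to the new list() include behavior, either by date:
compatibility_date = "2022-07-14"
# ...or explicitly by flag:
# compatibility_flags = ["r2_list_honor_include"]
```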
cf-create-bucket-if-missing can be set on a PutObject/CreateMultipartUpload request to implicitly create the bucket if it does not exist.
Fix S3 compatibility with the MinIO client’s spec-non-compliant XML for completing multipart uploads. Leading and trailing quotes in CompleteMultipartUpload are now optional and ignored, as this appears to be the actual non-standard behavior AWS implements.
Unsupported search parameters to ListObjects/ListObjectsV2 are now rejected with 501 Not Implemented.
Fixes for Listing:
Fix listing behavior when the number of files within a folder exceeds the limit (you’d end up seeing a CommonPrefix for that large folder N times where N = number of children within the CommonPrefix / limit).
Fix corner case where listing could skip objects sharing the base name of a “folder”.
Fix listing over some files that shared a certain common prefix.
DeleteObjects can now handle 1000 objects at a time.
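Since DeleteObjects accepts up to 1,000 keys per request, larger key sets must be split into batches client-side. `chunk` below is a generic illustrative helper, not an R2 API.

```javascript
// Sketch: batching keys for DeleteObjects, which handles up to 1000 keys
// per request. `chunk` is an illustrative helper, not an R2 API.
function chunk(keys, size = 1000) {
  const batches = [];
  for (let i = 0; i < keys.length; i += size) {
    batches.push(keys.slice(i, i + size));
  }
  return batches;
}
```

Each batch can then be sent as one DeleteObjects request.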
An S3 CreateBucket request can now specify x-amz-bucket-object-lock-enabled with a value of false without having the request rejected with a NotImplemented error. A value of true will continue to be rejected, as R2 does not yet support object locks.
Fixed a bug where the S3 API’s PutObject or the .put() binding could fail but still show the bucket upload as successful.
If conditional headers are provided to S3 API UploadObject or CreateMultipartUpload operations, and the object exists, a 412 Precondition Failed status code will be returned if these checks are not met.
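A sketch of the precondition evaluation follows. This is an illustrative reconstruction of standard HTTP conditional-header semantics, not R2's actual implementation; it shows how an existing object's etag and modification time determine a 200 vs. 412 outcome.

```javascript
// Sketch: evaluating conditional headers against an existing object.
// Returns the status the request should proceed with. Illustrative only.
function checkPreconditions(headers, existing) {
  if (!existing) return 200; // nothing to conflict with
  if (headers['if-match'] && headers['if-match'] !== existing.etag) return 412;
  if (headers['if-none-match'] && headers['if-none-match'] === existing.etag) return 412;
  if (headers['if-unmodified-since'] &&
      existing.lastModified > new Date(headers['if-unmodified-since'])) return 412;
  return 200;
}
```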
Add support for S3 virtual-hosted style paths, such as <BUCKET>.<ACCOUNT_ID>.r2.cloudflarestorage.com, instead of path-based routing (<ACCOUNT_ID>.r2.cloudflarestorage.com/<BUCKET>).
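The two addressing styles for the same bucket can be shown side by side. The account ID and bucket name below are placeholders; the hostname pattern follows this entry.

```javascript
// Sketch: the two R2 endpoint styles for the same bucket.
function r2Endpoints(accountId, bucket) {
  return {
    virtualHosted: `https://${bucket}.${accountId}.r2.cloudflarestorage.com`,
    pathStyle: `https://${accountId}.r2.cloudflarestorage.com/${bucket}`,
  };
}
```

S3 clients that only speak virtual-hosted addressing can now be pointed at the first form.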
Implemented GetBucketLocation for compatibility with external tools; it will always return a LocationConstraint of auto.
When using the S3 API, an empty string and us-east-1 will now alias to the auto region for compatibility with external tools.
GetBucketEncryption, PutBucketEncryption and DeleteBucketEncryption are now supported (the only supported value currently is AES256).
Unsupported operations are now explicitly rejected as unimplemented rather than being implicitly converted into ListObjectsV2/PutBucket/DeleteBucket respectively.
S3 API CompleteMultipartUpload requests are now properly escaped.
Pagination cursors are no longer returned when the number of keys in a bucket is the same as the MaxKeys argument.
The S3 API ListBuckets operation now accepts cf-max-keys, cf-start-after and cf-continuation-token headers, which behave the same as the respective URL parameters.
The S3 API ListBuckets and ListObjects endpoints now allow per_page to be 0.
The S3 API CopyObject source parameter now requires a leading slash.
The S3 API CopyObject operation now returns a NoSuchBucket error when copying to a non-existent bucket instead of an internal error.
Enforce the requirement for auto in SigV4 signing and the CreateBucket LocationConstraint parameter.
The S3 API CreateBucket operation now returns the proper location response header.