Multipart copy integration
To support migrating existing data to different buckets, we extended the multipart API to support copying the data referenced by existing file handles to a given storage location; see the updated File Services API documentation.
With the multipart copy we can use the underlying S3 machinery to quickly move large amounts of data from one bucket to another using S3's copy operation, without downloading and re-uploading the data.
Some notes about the new API:
Only copying from/to S3 buckets is supported (GC is not supported)
Only in-region copies are supported; if the source and target reside in different regions the copy cannot be initiated
The user must be the owner of the target storage location
The user must have read and download access to the source file handle through the provided association
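To make the shape of the request concrete, here is a rough sketch of the body that would be posted to initiate the copy. The field names are my assumptions modeled on the multipart upload request and the copy request model, and should be checked against the File Services docs:

```python
# Hypothetical sketch of the request body for initiating a multipart copy.
# All field names are assumptions to be verified against the REST docs.

def build_copy_request(file_handle_id, object_id, object_type,
                       storage_location_id, part_size_bytes):
    return {
        "concreteType": "org.sagebionetworks.repo.model.file.MultipartUploadCopyRequest",
        "storageLocationId": storage_location_id,
        "partSizeBytes": part_size_bytes,
        # The source is referenced through a file handle association,
        # which is what the read/download access check is performed against.
        "sourceFileHandleAssociation": {
            "fileHandleId": file_handle_id,
            "associateObjectId": object_id,
            "associateObjectType": object_type,
        },
    }
```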
The idea is to provide an integration in the python client; below are some ideas:
At a lower level, the API integration should work more or less the same as the multipart upload. I suggest using a part size bigger than for a normal upload: since Amazon performs the copy, no data is transferred from the client (aside from the requests). The TransferManager in the Java AWS SDK switches to multipart copies when the file is > 5GB, so I suggest starting from something similar and testing out various options.
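As a sketch of the sizing logic only (not client code), with a 5GB starting part size the number of copy-part requests stays small even for very large files:

```python
import math

# With Amazon doing the copy there is no client-side data transfer, so a
# large part size mainly serves to reduce the number of copy-part requests.
FIVE_GB = 5 * 1024 ** 3

def part_count(file_size_bytes, part_size_bytes=FIVE_GB):
    # Every file needs at least one part, even an empty one.
    return max(1, math.ceil(file_size_bytes / part_size_bytes))
```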
At a higher level, I would suggest a new operation in the python client such as "migrate" for entities, which can be used to "re-align" an entity to the storage location of its folder (or optionally a given storage location passed in input). This would need to fetch the ids of the file handles for each revision of the entity, use the copy API to create new file handles, and then update the file handle of each entity revision using the dedicated API: https://rest-docs.synapse.org/rest/PUT/entity/id/version/versionNumber/filehandle.html.
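A possible skeleton for such a migrate operation, with the REST calls injected as placeholders (all the helper names here are hypothetical stand-ins for the APIs listed below):

```python
# Hypothetical skeleton of a per-entity "migrate" operation. The four
# injected callables stand in for: listing entity versions, fetching the
# file handle of a version, initiating the multipart copy, and updating
# the revision's file handle. Injection keeps the flow testable.

def migrate_entity(entity_id, target_storage_location_id,
                   list_versions, get_file_handle,
                   copy_file_handle, update_version_file_handle):
    migrated = []
    for version in list_versions(entity_id):
        handle = get_file_handle(entity_id, version)
        if handle.get("storageLocationId") == target_storage_location_id:
            continue  # already in the target location, skip (idempotent)
        new_handle_id = copy_file_handle(
            handle["id"], entity_id, target_storage_location_id)
        update_version_file_handle(entity_id, version, new_handle_id)
        migrated.append((version, new_handle_id))
    return migrated
```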
There are some aspects to take into account:
Users might want to create new versions of the entity rather than updating the previous revision (and I would make this the default). In this case the normal PUT entity with newVersion=true should be used to link the new file handle.
Users might want to delete the previous file handle after a revision is updated with the new one; this is possible using the dedicated API: https://rest-docs.synapse.org/rest/DELETE/fileHandle/handleId.html. Note however that only the creator of a file handle can delete it.
Users might want a recursive option to run it on an entire folder or project, but I would leave this optional since the process might be run in a cluster where each entity is copied separately.
Files that are already in the provided storage location should be skipped, i.e. the operation should be idempotent (and I just realized that the backend should do this check and throw a 40x; I'm pretty sure I forgot to implement it).
I suggest we add an option to either stop on failures or continue: if some of the revisions use file handles that are not in a compatible S3 location (e.g. a different region, or not a linked S3 location) the operation might fail, and depending on the parameter it would stop or continue with the next file. We could produce a summary of the operation (e.g. a CSV file).
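A sketch of how the stop/continue option and the CSV summary could fit together (migrate_one is a hypothetical stand-in for the per-entity copy):

```python
import csv

# Sketch of the stop-vs-continue behaviour with a CSV summary of the run.
# migrate_one(entity_id) is an assumed per-entity migration callable that
# raises on failure (e.g. an incompatible source S3 location).

def migrate_all(entity_ids, migrate_one, summary_path,
                continue_on_error=False):
    rows = []
    for entity_id in entity_ids:
        try:
            migrate_one(entity_id)
            rows.append({"entity_id": entity_id,
                         "status": "MIGRATED", "error": ""})
        except Exception as e:
            rows.append({"entity_id": entity_id,
                         "status": "FAILED", "error": str(e)})
            if not continue_on_error:
                break  # stop at the first failure
    with open(summary_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["entity_id", "status", "error"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```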
We might want to support a manifest of entities to move.
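E.g. a minimal manifest reader, assuming a CSV with a single entity_id column (the format is just a suggestion):

```python
import csv

# Sketch of reading a manifest of entities to move; the "entity_id" column
# name is an assumed format, not a defined one.

def read_manifest(fileobj):
    return [row["entity_id"] for row in csv.DictReader(fileobj)]
```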
Finally, even though Amazon confirmed that this is not necessary when the source file does not change, users might want to force an MD5 check of the copied parts. This is possible by providing the MD5 checksum of the parts when requesting the pre-signed URLs: https://rest-docs.synapse.org/rest/org/sagebionetworks/repo/model/file/BatchPresignedUploadUrlRequest.html. If the part size is bigger than the file size then the file handle MD5 can be used; otherwise the part MD5 must be computed either from a cached version of the file, or by sending a range request for the part on the pre-signed URL of the source file and computing it on the fly. I would add a warning about this and an explicit parameter such as forceMD5Check. Maybe an interactive user confirmation should also be in place, with an optional parameter to skip the interaction, so that the user understands that files might need to be downloaded in order to do this.
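A sketch of the per-part MD5 computation; here the range reads are simulated against a local copy of the file, whereas in the client the same byte ranges would be fetched via Range requests on the source file's pre-signed URL:

```python
import hashlib

# Sketch for the forceMD5Check idea: compute one MD5 per part by reading
# the file in part-sized byte ranges. Reading from a local path stands in
# for HTTP Range requests against the source pre-signed URL.

def part_md5s(path, part_size_bytes):
    digests = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size_bytes)
            if not chunk:
                break
            digests.append(hashlib.md5(chunk).hexdigest())
    return digests
```

Note that if the part size exceeds the file size this yields a single digest, matching the case above where the file handle MD5 can be reused directly.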
Related APIs that might be useful:
https://rest-docs.synapse.org/rest/POST/file/multipart.html: Initiate the multipart copy
https://rest-docs.synapse.org/rest/GET/entity/id/version.html: Fetch the revisions of an entity
https://rest-docs.synapse.org/rest/GET/entity/id/version/versionNumber/filehandles.html: Fetch the file handles of an entity revision
https://rest-docs.synapse.org/rest/POST/file/multipart/uploadId/presigned/url/batch.html: Fetch a batch of pre-signed URLs or file handles (e.g. using the includePreSignedURLs=false and includeFileHandles=true) for a file handle association (might be useful to get the information about a batch of file handles)
https://rest-docs.synapse.org/rest/PUT/entity/id/version/versionNumber/filehandle.html: Updates the file handle of an entity revision
https://rest-docs.synapse.org/rest/PUT/entity/id.html: Updates an entity
, I was able to migrate the test folder that had failed previously. Thank you for fixing this!
, I will try this out this week. Thank you!
I know that there are some wrinkles today about how STS relates to migration etc., but the release candidate for Python client v2.3.0 includes the utilities to conduct a storage location migration.
Are you able to validate the storage migration aspect of this functionality (leaving aside STS for the moment)? Note that this release candidate includes a fix for the issue you were encountering, where two identical versions of an entity resulted in an exception, which is separately documented in
The release candidate can be installed, e.g.:
pip3 install --upgrade --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple "synapseclient>=2.3.0"
And the updated documentation can be viewed below, an earlier version of which I think you’ve seen.
As we discussed yesterday, I would also appreciate any feedback you have on the above and on the trial version I sent you, when you have a chance, since it sounds like you will also be a user of this feature.