Soto 5.0 preview

This article has been updated to reflect the change in name of AWS SDK Swift to Soto.

What is most likely the final 5.0 alpha of Soto has been released, and we are almost feature complete. This article covers some of the new features to be found in version 5.0. Below we cover streaming of request and response payloads, credential providers, and Codable support for DynamoDB.


Streaming

With v5.0 it is possible to stream both the bodies of requests and responses, allowing the user to upload or download large objects without having to hold the complete object in memory. The most obvious use case for this is uploading or downloading files to S3. In Soto 4, S3.PutObject and S3.GetObject required the whole file to be in memory. With streaming this is no longer the case.

Request Streaming

Previously, operations that include a raw data payload had a Foundation Data in their request shape, so the whole payload had to be in memory to upload it. The Data object has been replaced with an AWSPayload object, which can be initialised with a String, Data, NIO.ByteBuffer and, more importantly, a closure that can be used for streaming payloads. The static function AWSPayload.stream returns an AWSPayload that streams slices of your payload. The following would upload a 2MB block of data to S3.

let payload = AWSPayload.stream(size: 2*1024*1024) { eventLoop in
    let buffer = giveMeAChunkFromMyByteBuffer()
    return eventLoop.makeSucceededFuture(.byteBuffer(buffer))
}
let request = S3.PutObjectRequest(body: payload, bucket: "my-bucket", key: "my-file")
let response = try s3.putObject(request).wait()

The closure gets called repeatedly until 2MB of data has been supplied. Some operations don't require a size; in that situation, once you have finished supplying all your data you should return eventLoop.makeSucceededFuture(.end) to indicate there is no more data.
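For the unsized case, a minimal sketch might look like the following (nextChunk() is a hypothetical helper, not part of Soto, returning the next ByteBuffer or nil when the data is exhausted):

```swift
// Streaming without a known size: keep returning buffers, then .end.
// `nextChunk()` is a hypothetical helper that yields Optional<ByteBuffer>.
let payload = AWSPayload.stream { eventLoop in
    if let buffer = nextChunk() {
        return eventLoop.makeSucceededFuture(.byteBuffer(buffer))
    }
    // No more data: signal the end of the stream.
    return eventLoop.makeSucceededFuture(.end)
}
```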

As file uploading will probably be the most common operation, there is an additional wrapper for supplying a file, which uploads it in 64KB blocks. You need to provide it with a NIOFileHandle and a NonBlockingFileIO. Check out the SwiftNIO documentation for details on these. Again, if the operation requires a size, as all S3 operations do, you will have to supply it. Assuming you have all of the above, you can upload a file to S3 with the following.

let request = S3.PutObjectRequest(
    body: .fileHandle(nioFileHandle, size: fileSize, fileIO: nonBlockFileIO),
    bucket: "my-bucket",
    key: "my-file"
)
let response = try s3.putObject(request).wait()
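For reference, here is a rough sketch of how the nioFileHandle, fileSize and nonBlockFileIO above could be obtained with SwiftNIO (the path is illustrative and error handling is omitted):

```swift
import NIO

// Thread pool backing the non-blocking file IO, so file reads
// happen off the event loop.
let threadPool = NIOThreadPool(numberOfThreads: 2)
threadPool.start()
let nonBlockFileIO = NonBlockingFileIO(threadPool: threadPool)

let eventLoopGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)

// Open the file; the returned FileRegion tells us its size.
let (nioFileHandle, region) = try nonBlockFileIO.openFile(
    path: "/path/to/my-file",
    eventLoop: eventLoopGroup.next()
).wait()
defer { try? nioFileHandle.close() }
let fileSize = region.readableBytes
```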

Response Streaming

Response payload streaming is dealt with slightly differently. Every operation that supports response payload streaming has an additional function suffixed with the word Streaming. This function takes a closure that is fed ByteBuffers as they are downloaded from AWS. The following will process S3 data as it is being downloaded.

let getRequest = S3.GetObjectRequest(bucket: "my-bucket", key: "my-file")
let response = try s3.getObjectStreaming(getRequest) { byteBuffer, eventLoop in
    // process byteBuffer here
    return eventLoop.makeSucceededFuture(())
}.wait()
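As a usage example, the closure below simply counts the bytes as they arrive; this is a sketch, and any real processing of the chunks would go in its place:

```swift
var bytesDownloaded = 0
let request = S3.GetObjectRequest(bucket: "my-bucket", key: "my-file")
_ = try s3.getObjectStreaming(request) { byteBuffer, eventLoop in
    // Called once per downloaded chunk.
    bytesDownloaded += byteBuffer.readableBytes
    return eventLoop.makeSucceededFuture(())
}.wait()
print("Downloaded \(bytesDownloaded) bytes")
```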

Credential Providers

Previously in Soto, the user had limited control over how credentials were provided to the library. Either you supplied credentials explicitly in code, or it would look for credentials in one of four places: environment variables, the shared credentials file ~/.aws/credentials, ECS IAM policy, or EC2 instance metadata.

This has now been replaced with CredentialProviders. When you create your AWSClient, you now provide it with a CredentialProviderFactory, which creates a CredentialProvider during client initialisation. A CredentialProvider is a protocol with a function getCredential that returns an EventLoopFuture<Credential>, fulfilled with the AWS credentials when they become available. There are CredentialProviders supporting the various methods of credential acquisition mentioned above. Below is an example which only looks for credentials in the EC2 instance metadata.
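Because CredentialProvider is a protocol, you can also supply your own implementation. Below is a minimal sketch, assuming the getCredential signature shown matches the current alpha; the key values are placeholders, not real credentials:

```swift
import Logging
import SotoCore

// A provider that always returns the same fixed credentials.
struct FixedCredentialProvider: CredentialProvider {
    func getCredential(on eventLoop: EventLoop, logger: Logger) -> EventLoopFuture<Credential> {
        let credential = StaticCredential(
            accessKeyId: "MY_ACCESS_KEY_ID",
            secretAccessKey: "MY_SECRET_ACCESS_KEY"
        )
        return eventLoop.makeSucceededFuture(credential)
    }
}
```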

let client = AWSClient(credentialProvider: .ec2, ...)

There is also a selector credential provider, to which you provide a list of credential providers; the first one that provides valid AWS credentials is chosen. The default for credential acquisition is the same as in Soto 4 and is implemented using a selector as follows.

let client = AWSClient(credentialProvider: .selector(.environment, .ecs, .ec2, .configFile()), ...)

By abstracting credential acquisition, we have been able to build other credential providers that the core library alone could not support. These include CredentialProviders for STS and CognitoIdentity. Below is an example of using the .stsAssumeRole credential provider.

import AWSSTS

let request = STS.AssumeRoleRequest(roleArn: "arn:aws:iam::000000000000:role/Admin", roleSessionName: "session-name")
let client = AWSClient(
    credentialProvider: .stsAssumeRole(request, region: .euwest2),
    httpClientProvider: .createNew
)

DynamoDB Codable

Working with DynamoDB can generate very bulky code: it generally involves custom code for converting between your application types and the DynamoDB structures, and is generally quite clunky. There have been a number of changes to improve on this. Given the following struct and instance

struct Person {
    let name: String
    let age: Int
}
let person = Person(name: "John Smith", age: 36)

This is the v4.0 code for putItem.

let input = DynamoDB.PutItemInput(
    item: [
        "name": .init(s: person.name),
        "age": .init(n: person.age.description)
    ],
    tableName: "my-table"
)
let output = try dynamoDB.putItem(input).wait()

With v5.0 we have added Codable support for DynamoDB. The library has been extended to include a DynamoDBEncoder and a DynamoDBDecoder, which encode and decode Codable objects to and from dictionaries of DynamoDB.AttributeValue. Custom versions of the most commonly used functions (putItem, getItem, query, scan, updateItem) that take or return Codable objects have also been added. If you make Person conform to Codable, the above code is simplified to

let input = DynamoDB.PutItemInput(item: person, tableName: "my-table")
let output = try dynamoDB.putItem(input).wait()

This is easier to read and doesn't require custom code for each object type being inserted into a DynamoDB table.
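You can also use DynamoDBEncoder directly to see what a Codable object maps to. A hedged sketch, assuming the encode and decode methods shown here; the exact API may differ slightly in the alpha:

```swift
struct Person: Codable {
    let name: String
    let age: Int
}
let person = Person(name: "John Smith", age: 36)

// Encode to a dictionary of DynamoDB.AttributeValue keyed by attribute name.
let attributes = try DynamoDBEncoder().encode(person)

// Decode back into the original Codable type.
let decoded = try DynamoDBDecoder().decode(Person.self, from: attributes)
```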

Check it out

Above we have detailed three of the major new features of Soto 5. These are not the only changes though; v5.0 includes many other improvements and features. If you are using Soto, this is probably a good time to check out v5.0.