v1.25.8 #2880

Merged 1 commit on Oct 8, 2019

12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,15 @@
Release v1.25.8 (2019-10-08)
===

### Service Client Updates
* `service/datasync`: Updates service API and documentation
* `aws/endpoints`: Updated Regions and Endpoints metadata.
* `service/eventbridge`: Updates service documentation
* `service/firehose`: Updates service API and documentation
* With this release, you can use Amazon Kinesis Firehose delivery streams to deliver streaming data to Amazon Elasticsearch Service version 7.x clusters. For technical documentation, see the CreateDeliveryStream operation in the Amazon Kinesis Firehose API reference.
* `service/organizations`: Updates service documentation
* Documentation updates for organizations
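The Firehose change above (Elasticsearch 7.x support) hinges on `TypeName` becoming optional in the destination configuration. The following is a minimal sketch of that request shape using only the Go standard library; the struct and field names mirror the model JSON in this PR, while the helper names and ARN are hypothetical, not the SDK's real types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// esDestination sketches the Elasticsearch destination portion of a
// CreateDeliveryStream request. TypeName carries `omitempty` because this
// release makes it optional: Elasticsearch 7.x clusters have no types.
type esDestination struct {
	RoleARN   string `json:"RoleARN"`
	IndexName string `json:"IndexName"`
	TypeName  string `json:"TypeName,omitempty"` // leave empty for ES 7.x
}

func encode(d esDestination) string {
	b, _ := json.Marshal(d)
	return string(b)
}

func main() {
	// Hypothetical role ARN; for a 7.x cluster, TypeName is simply not set,
	// so it is omitted from the serialized request.
	fmt.Println(encode(esDestination{
		RoleARN:   "arn:aws:iam::123456789012:role/firehose-role",
		IndexName: "logs",
	}))
}
```

With `TypeName` unset, the field drops out of the payload entirely; for a 6.x cluster you would still populate it.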

Release v1.25.7 (2019-10-07)
===

36 changes: 33 additions & 3 deletions aws/endpoints/defaults.go

(Generated file; diff not rendered.)

2 changes: 1 addition & 1 deletion aws/version.go
@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"

// SDKVersion is the version of this SDK
const SDKVersion = "1.25.7"
const SDKVersion = "1.25.8"
12 changes: 11 additions & 1 deletion models/apis/datasync/2018-11-09/api-2.json
@@ -993,7 +993,8 @@
"PreserveDeletedFiles":{"shape":"PreserveDeletedFiles"},
"PreserveDevices":{"shape":"PreserveDevices"},
"PosixPermissions":{"shape":"PosixPermissions"},
"BytesPerSecond":{"shape":"BytesPerSecond"}
"BytesPerSecond":{"shape":"BytesPerSecond"},
"TaskQueueing":{"shape":"TaskQueueing"}
}
},
"OverwriteMode":{
@@ -1225,6 +1226,7 @@
"TaskExecutionStatus":{
"type":"string",
"enum":[
"QUEUED",
"LAUNCHING",
"PREPARING",
"TRANSFERRING",
@@ -1245,11 +1247,19 @@
"Name":{"shape":"TagValue"}
}
},
"TaskQueueing":{
"type":"string",
"enum":[
"ENABLED",
"DISABLED"
]
},
"TaskStatus":{
"type":"string",
"enum":[
"AVAILABLE",
"CREATING",
"QUEUED",
"RUNNING",
"UNAVAILABLE"
]
10 changes: 8 additions & 2 deletions models/apis/datasync/2018-11-09/docs-2.json
@@ -5,9 +5,9 @@
"CancelTaskExecution": "<p>Cancels execution of a task. </p> <p>When you cancel a task execution, the transfer of some files is abruptly interrupted. The contents of files that are transferred to the destination might be incomplete or inconsistent with the source files. However, if you start a new task execution on the same task and you allow the task execution to complete, file content on the destination is complete and consistent. This applies to other unexpected failures that interrupt a task execution. In all of these cases, AWS DataSync successfully completes the transfer when you start the next task execution.</p>",
"CreateAgent": "<p>Activates an AWS DataSync agent that you have deployed on your host. The activation process associates your agent with your account. In the activation process, you specify information such as the AWS Region that you want to activate the agent in. You activate the agent in the AWS Region where your target locations (in Amazon S3 or Amazon EFS) reside. Your tasks are created in this AWS Region.</p> <p>You can activate the agent in a VPC (Virtual Private Cloud) or provide the agent access to a VPC endpoint so you can run tasks without going over the public Internet.</p> <p>You can use an agent for more than one location. If a task uses multiple agents, all of them need to have status AVAILABLE for the task to run. If you use multiple agents for a source location, the status of all the agents must be AVAILABLE for the task to run. </p> <p>Agents are automatically updated by AWS on a regular basis, using a mechanism that ensures minimal interruption to your tasks.</p> <p/>",
"CreateLocationEfs": "<p>Creates an endpoint for an Amazon EFS file system.</p>",
"CreateLocationNfs": "<p>Creates an endpoint for a Network File System (NFS) file system.</p>",
"CreateLocationNfs": "<p>Defines a file system on a Network File System (NFS) server that can be read from or written to.</p>",
"CreateLocationS3": "<p>Creates an endpoint for an Amazon S3 bucket.</p> <p>For AWS DataSync to access a destination S3 bucket, it needs an AWS Identity and Access Management (IAM) role that has the required permissions. You can set up the required permissions by creating an IAM policy that grants the required permissions and attaching the policy to the role. An example of such a policy is shown in the examples section.</p> <p>For more information, see https://docs.aws.amazon.com/datasync/latest/userguide/working-with-locations.html#create-s3-location in the <i>AWS DataSync User Guide.</i> </p>",
"CreateLocationSmb": "<p>Creates an endpoint for a Server Message Block (SMB) file system.</p>",
"CreateLocationSmb": "<p>Defines a file system on a Server Message Block (SMB) server that can be read from or written to.</p>",
"CreateTask": "<p>Creates a task. A task is a set of two locations (source and destination) and a set of Options that you use to control the behavior of a task. If you don't specify Options when you create a task, AWS DataSync populates them with service defaults.</p> <p>When you create a task, it first enters the CREATING state. During CREATING, AWS DataSync attempts to mount the on-premises Network File System (NFS) location. The task transitions to the AVAILABLE state without waiting for the AWS location to become mounted. If required, AWS DataSync mounts the AWS location before each task execution.</p> <p>If an agent that is associated with a source (NFS) location goes offline, the task transitions to the UNAVAILABLE status. If the status of the task remains in the CREATING status for more than a few minutes, it means that your agent might be having trouble mounting the source NFS file system. Check the task's ErrorCode and ErrorDetail. Mount issues are often caused by either a misconfigured firewall or a mistyped NFS server host name.</p>",
"DeleteAgent": "<p>Deletes an agent. To specify which agent to delete, use the Amazon Resource Name (ARN) of the agent in your request. The operation disassociates the agent from your AWS account. However, it doesn't delete the agent virtual machine (VM) from your on-premises environment.</p>",
"DeleteLocation": "<p>Deletes the configuration of a location used by AWS DataSync. </p>",
@@ -806,6 +806,12 @@
"TaskList$member": null
}
},
"TaskQueueing": {
"base": null,
"refs": {
"Options$TaskQueueing": "<p>A value that determines whether tasks should be queued before executing the tasks. If set to <code>ENABLED</code>, the tasks will be queued. The default is <code>ENABLED</code>.</p> <p>If you use the same agent to run multiple tasks, you can enable the tasks to run in series. For more information, see <a>task-queue</a>.</p>"
}
},
"TaskStatus": {
"base": null,
"refs": {
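The new `TaskQueueing` option documented above is a simple enum on the task's Options. The sketch below, using only the standard library, shows how the option serializes; the struct name and helper are hypothetical, with field names taken from the model JSON in this PR.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// taskOptions sketches the subset of the DataSync Options shape touched by
// this change; TaskQueueing takes "ENABLED" or "DISABLED" per the new enum.
type taskOptions struct {
	BytesPerSecond int64  `json:"BytesPerSecond,omitempty"`
	TaskQueueing   string `json:"TaskQueueing,omitempty"`
}

func encodeOptions(o taskOptions) string {
	b, _ := json.Marshal(o)
	return string(b)
}

func main() {
	// Queue executions so several tasks can share one agent in series.
	fmt.Println(encodeOptions(taskOptions{TaskQueueing: "ENABLED"}))
}
```

Omitting the field keeps the service default (queueing enabled), matching the doc string above.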
16 changes: 8 additions & 8 deletions models/apis/eventbridge/2015-10-07/docs-2.json
@@ -576,7 +576,7 @@
"Principal": {
"base": null,
"refs": {
"PutPermissionRequest$Principal": "<p>The 12-digit AWS account ID that you are permitting to put events to your default event bus. Specify \"*\" to permit any account to put events to your default event bus.</p> <p>If you specify \"*\" without specifying <code>Condition</code>, avoid creating rules that might match undesirable events. To create more secure rules, make sure that the event pattern for each rule contains an <code>account</code> field with a specific account ID to receive events from. Rules with an account field don't match any events sent from other accounts.</p>"
"PutPermissionRequest$Principal": "<p>The 12-digit AWS account ID that you are permitting to put events to your default event bus. Specify \"*\" to permit any account to put events to your default event bus.</p> <p>If you specify \"*\" without specifying <code>Condition</code>, avoid creating rules that might match undesirable events. To create more secure rules, make sure that the event pattern for each rule contains an <code>account</code> field with a specific account ID to receive events from. Rules that have an account field match events sent only from accounts that are listed in the rule's <code>account</code> field.</p>"
}
},
"PutEventsRequest": {
@@ -762,7 +762,7 @@
"EnableRuleRequest$Name": "<p>The name of the rule.</p>",
"ListRulesRequest$NamePrefix": "<p>The prefix matching the rule name.</p>",
"ListTargetsByRuleRequest$Rule": "<p>The name of the rule.</p>",
"PutRuleRequest$Name": "<p>The name of the rule that you're creating or updating.</p>",
"PutRuleRequest$Name": "<p>The name of the rule that you're creating or updating.</p> <p>A rule can't have the same name as another rule in the same Region or on the same event bus.</p>",
"PutTargetsRequest$Rule": "<p>The name of the rule.</p>",
"RemoveTargetsRequest$Rule": "<p>The name of the rule.</p>",
"Rule$Name": "<p>The name of the rule.</p>",
@@ -875,11 +875,11 @@
"PartnerEventSource$Arn": "<p>The ARN of the partner event source.</p>",
"PartnerEventSource$Name": "<p>The name of the partner event source.</p>",
"PutEventsRequestEntry$Source": "<p>The source of the event. This field is required.</p>",
"PutEventsRequestEntry$DetailType": "<p>Free-form string used to decide which fields to expect in the event detail.</p>",
"PutEventsRequestEntry$Detail": "<p>A valid JSON string. There is no other schema imposed. The JSON string can contain fields and nested subobjects.</p>",
"PutPartnerEventsRequestEntry$Source": "<p>The event source that is generating the entry.</p>",
"PutPartnerEventsRequestEntry$DetailType": "<p>A free-form string used to decide which fields to expect in the event detail.</p>",
"PutPartnerEventsRequestEntry$Detail": "<p>A valid JSON string. There is no other schema imposed. The JSON string can contain fields and nested subobjects.</p>",
"PutEventsRequestEntry$DetailType": "<p>Free-form string used to decide which fields to expect in the event detail. This field is required.</p>",
"PutEventsRequestEntry$Detail": "<p>A valid JSON object. There is no other schema imposed. The JSON object can contain fields and nested subobjects.</p> <p>This field is required.</p>",
"PutPartnerEventsRequestEntry$Source": "<p>The event source that is generating the entry. This field is required.</p>",
"PutPartnerEventsRequestEntry$DetailType": "<p>A free-form string used to decide which fields to expect in the event detail. This field is required.</p>",
"PutPartnerEventsRequestEntry$Detail": "<p>A valid JSON object. There is no other schema imposed. The JSON object can contain fields and nested subobjects. This field is required.</p>",
"StringList$member": null,
"TestEventPatternRequest$Event": "<p>The event, in JSON format, to test against the event pattern.</p>"
}
@@ -952,7 +952,7 @@
"refs": {
"PutTargetsResultEntry$TargetId": "<p>The ID of the target.</p>",
"RemoveTargetsResultEntry$TargetId": "<p>The ID of the target.</p>",
"Target$Id": "<p>The ID of the target.</p>",
"Target$Id": "<p>A name for the target. Use a string that will help you identify the target. Each target associated with a rule must have an <code>Id</code> unique for that rule.</p>",
"TargetIdList$member": null
}
},
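The doc updates above clarify that a PutEvents entry's `Source`, `DetailType`, and `Detail` are all required, and that `Detail` must be valid JSON. The following standard-library sketch validates and serializes one entry; the type, helper, and example source name are hypothetical, not the SDK's real API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// putEventsEntry sketches one PutEvents request entry; all three fields
// are required per the documentation changes above.
type putEventsEntry struct {
	Source     string `json:"Source"`
	DetailType string `json:"DetailType"`
	Detail     string `json:"Detail"` // must be valid JSON (a JSON object per the docs)
}

func encodeEntry(e putEventsEntry) (string, error) {
	// Reject entries whose Detail would be rejected by the service anyway.
	if !json.Valid([]byte(e.Detail)) {
		return "", fmt.Errorf("Detail is not valid JSON: %q", e.Detail)
	}
	b, err := json.Marshal(e)
	return string(b), err
}

func main() {
	out, err := encodeEntry(putEventsEntry{
		Source:     "com.example.app", // hypothetical custom source
		DetailType: "order.created",
		Detail:     `{"orderId":42}`,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Validating `Detail` locally catches malformed payloads before a round trip to the service.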
3 changes: 1 addition & 2 deletions models/apis/firehose/2015-08-04/api-2.json
@@ -458,7 +458,6 @@
"required":[
"RoleARN",
"IndexName",
"TypeName",
"S3Configuration"
],
"members":{
@@ -551,7 +550,7 @@
"ElasticsearchTypeName":{
"type":"string",
"max":100,
"min":1
"min":0
},
"EncryptionConfiguration":{
"type":"structure",
6 changes: 3 additions & 3 deletions models/apis/firehose/2015-08-04/docs-2.json
@@ -391,9 +391,9 @@
"ElasticsearchTypeName": {
"base": null,
"refs": {
"ElasticsearchDestinationConfiguration$TypeName": "<p>The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Kinesis Data Firehose returns an error during run time.</p>",
"ElasticsearchDestinationDescription$TypeName": "<p>The Elasticsearch type name.</p>",
"ElasticsearchDestinationUpdate$TypeName": "<p>The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Kinesis Data Firehose returns an error during runtime.</p>"
"ElasticsearchDestinationConfiguration$TypeName": "<p>The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Kinesis Data Firehose returns an error during run time.</p> <p>For Elasticsearch 7.x, don't specify a <code>TypeName</code>.</p>",
"ElasticsearchDestinationDescription$TypeName": "<p>The Elasticsearch type name. This applies to Elasticsearch 6.x and lower versions. For Elasticsearch 7.x, there's no value for <code>TypeName</code>.</p>",
"ElasticsearchDestinationUpdate$TypeName": "<p>The Elasticsearch type name. For Elasticsearch 6.x, there can be only one type per index. If you try to specify a new type for an existing index that already has another type, Kinesis Data Firehose returns an error during runtime.</p> <p>If you upgrade Elasticsearch from 6.x to 7.x and don’t update your delivery stream, Kinesis Data Firehose still delivers data to Elasticsearch with the old index name and type name. If you want to update your delivery stream with a new index name, provide an empty string for <code>TypeName</code>. </p>"
}
},
"EncryptionConfiguration": {
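The `ElasticsearchDestinationUpdate` doc string above says to provide an empty string for `TypeName` after a 6.x to 7.x upgrade, which the model change (`min` dropped from 1 to 0) now permits. A pointer field, as in this standard-library sketch, lets an explicit `""` be distinguished from "leave unchanged"; the type and helper names are hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// esUpdate sketches ElasticsearchDestinationUpdate. TypeName is a pointer
// so that an explicit empty string (clear the type after upgrading the
// cluster to 7.x) can be told apart from leaving the type unchanged.
type esUpdate struct {
	IndexName string  `json:"IndexName,omitempty"`
	TypeName  *string `json:"TypeName,omitempty"`
}

func encodeUpdate(u esUpdate) string {
	b, _ := json.Marshal(u)
	return string(b)
}

func main() {
	empty := ""
	// Explicitly send TypeName:"" when moving a delivery stream to ES 7.x.
	fmt.Println(encodeUpdate(esUpdate{IndexName: "logs-v2", TypeName: &empty}))
}
```

A nil `TypeName` omits the field entirely, so an existing 6.x type name stays in effect.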