Update API Client from latest released models #1521

Merged
2 commits merged on Dec 3, 2021
22 changes: 22 additions & 0 deletions .changelog/270a627fb296447bb6dcebdd43fcf353.json
@@ -0,0 +1,22 @@
{
"id": "270a627f-b296-447b-b6dc-ebdd43fcf353",
"type": "feature",
"description": "API client updated",
"modules": [
"service/accessanalyzer",
"service/amp",
"service/appmesh",
"service/braket",
"service/codeguruprofiler",
"service/evidently",
"service/grafana",
"service/location",
"service/networkmanager",
"service/nimble",
"service/proton",
"service/ram",
"service/rekognition",
"service/snowdevicemanagement",
"service/wisdom"
]
}
8 changes: 8 additions & 0 deletions .changelog/41575353444b40ffbf474f4155544f00.json
@@ -0,0 +1,8 @@
{
"id": "41575353-444b-40ff-bf47-4f4155544f00",
"type": "release",
"description": "New AWS service client module",
"modules": [
"service/amplifyuibuilder"
]
}
2,408 changes: 2,408 additions & 0 deletions codegen/sdk-codegen/aws-models/amplifyuibuilder.2021-08-11.json

Large diffs are not rendered by default.

8,187 changes: 6,140 additions & 2,047 deletions codegen/sdk-codegen/aws-models/networkmanager.2019-07-05.json

Large diffs are not rendered by default.

545 changes: 316 additions & 229 deletions codegen/sdk-codegen/aws-models/ram.2018-01-04.json

Large diffs are not rendered by default.

14 changes: 11 additions & 3 deletions codegen/sdk-codegen/aws-models/rekognition.2016-06-27.json
@@ -469,7 +469,7 @@
}
],
"traits": {
"smithy.api#documentation": "<p>Compares a face in the <i>source</i> input image with\n each of the 100 largest faces detected in the <i>target</i> input image.\n </p>\n \n <p> If the source image contains multiple faces, the service detects the largest face\n and compares it with each face detected in the target image. </p>\n \n \n <note>\n <p>CompareFaces uses machine learning algorithms, which are probabilistic. \n A false negative is an incorrect prediction that\n a face in the target image has a low similarity confidence score when compared to the face\n in the source image. To reduce the probability of false negatives, \n we recommend that you compare the target image against multiple source images.\n If you plan to use <code>CompareFaces</code> to make a decision that impacts an individual's rights,\n privacy, or access to services, we recommend that you pass the result to a human for review and further\n validation before taking action.</p>\n </note>\n\n\n <p>You pass the input and target images either as base64-encoded image bytes or as\n references to images in an Amazon S3 bucket. If you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes isn't\n supported. The image must be formatted as a PNG or JPEG file. </p>\n <p>In response, the operation returns an array of face matches ordered by similarity score\n in descending order. For each face match, the response provides a bounding box of the face,\n facial landmarks, pose details (pitch, role, and yaw), quality (brightness and sharpness), and\n confidence value (indicating the level of confidence that the bounding box contains a face).\n The response also provides a similarity score, which indicates how closely the faces match. </p>\n\n <note>\n <p>By default, only faces with a similarity score of greater than or equal to 80% are\n returned in the response. 
You can change this value by specifying the\n <code>SimilarityThreshold</code> parameter.</p>\n </note>\n\n <p>\n <code>CompareFaces</code> also returns an array of faces that don't match the source image. \n For each face, it returns a bounding box, confidence value, landmarks, pose details, and quality.\n The response also returns information about the face in the source image, including the bounding box\n of the face and confidence value.</p>\n \n <p>The <code>QualityFilter</code> input parameter allows you to filter out detected faces\n that don’t meet a required quality bar. The quality bar is based on a\n variety of common use cases. Use <code>QualityFilter</code> to set the quality bar\n by specifying <code>LOW</code>, <code>MEDIUM</code>, or <code>HIGH</code>.\n If you do not want to filter detected faces, specify <code>NONE</code>. The default value is <code>NONE</code>. </p>\n\n <p>If the image doesn't contain Exif metadata, <code>CompareFaces</code> returns orientation information for the\n source and target images. Use these values to display the images with the correct image orientation.</p>\n <p>If no faces are detected in the source or target images, <code>CompareFaces</code> returns an \n <code>InvalidParameterException</code> error. </p>\n\n\n <note>\n <p> This is a stateless API operation. That is, data returned by this operation doesn't persist.</p>\n </note>\n\n \n <p>For an example, see Comparing Faces in Images in the Amazon Rekognition Developer Guide.</p>\n <p>This operation requires permissions to perform the <code>rekognition:CompareFaces</code>\n action.</p>"
"smithy.api#documentation": "<p>Compares a face in the <i>source</i> input image with\n each of the 100 largest faces detected in the <i>target</i> input image.\n </p>\n \n <p> If the source image contains multiple faces, the service detects the largest face\n and compares it with each face detected in the target image. </p>\n \n \n <note>\n <p>CompareFaces uses machine learning algorithms, which are probabilistic. \n A false negative is an incorrect prediction that\n a face in the target image has a low similarity confidence score when compared to the face\n in the source image. To reduce the probability of false negatives, \n we recommend that you compare the target image against multiple source images.\n If you plan to use <code>CompareFaces</code> to make a decision that impacts an individual's rights,\n privacy, or access to services, we recommend that you pass the result to a human for review and further\n validation before taking action.</p>\n </note>\n\n\n <p>You pass the input and target images either as base64-encoded image bytes or as\n references to images in an Amazon S3 bucket. If you use the\n AWS\n CLI to call Amazon Rekognition operations, passing image bytes isn't\n supported. The image must be formatted as a PNG or JPEG file. </p>\n <p>In response, the operation returns an array of face matches ordered by similarity score\n in descending order. For each face match, the response provides a bounding box of the face,\n facial landmarks, pose details (pitch, roll, and yaw), quality (brightness and sharpness), and\n confidence value (indicating the level of confidence that the bounding box contains a face).\n The response also provides a similarity score, which indicates how closely the faces match. </p>\n\n <note>\n <p>By default, only faces with a similarity score of greater than or equal to 80% are\n returned in the response. 
You can change this value by specifying the\n <code>SimilarityThreshold</code> parameter.</p>\n </note>\n\n <p>\n <code>CompareFaces</code> also returns an array of faces that don't match the source image. \n For each face, it returns a bounding box, confidence value, landmarks, pose details, and quality.\n The response also returns information about the face in the source image, including the bounding box\n of the face and confidence value.</p>\n \n <p>The <code>QualityFilter</code> input parameter allows you to filter out detected faces\n that don’t meet a required quality bar. The quality bar is based on a\n variety of common use cases. Use <code>QualityFilter</code> to set the quality bar\n by specifying <code>LOW</code>, <code>MEDIUM</code>, or <code>HIGH</code>.\n If you do not want to filter detected faces, specify <code>NONE</code>. The default value is <code>NONE</code>. </p>\n\n <p>If the image doesn't contain Exif metadata, <code>CompareFaces</code> returns orientation information for the\n source and target images. Use these values to display the images with the correct image orientation.</p>\n <p>If no faces are detected in the source or target images, <code>CompareFaces</code> returns an \n <code>InvalidParameterException</code> error. </p>\n\n\n <note>\n <p> This is a stateless API operation. That is, data returned by this operation doesn't persist.</p>\n </note>\n\n \n <p>For an example, see Comparing Faces in Images in the Amazon Rekognition Developer Guide.</p>\n <p>This operation requires permissions to perform the <code>rekognition:CompareFaces</code>\n action.</p>"
}
},
"com.amazonaws.rekognition#CompareFacesMatch": {
@@ -2751,7 +2751,7 @@
}
],
"traits": {
"smithy.api#documentation": "<p>Detects text in the input image and converts it into machine-readable text.</p>\n <p>Pass the input image as base64-encoded image bytes or as a reference to an image in an\n Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, you must pass it as a\n reference to an image in an Amazon S3 bucket. For the AWS CLI, passing image bytes is not\n supported. The image must be either a .png or .jpeg formatted file. </p>\n <p>The <code>DetectText</code> operation returns text in an array of <a>TextDetection</a> elements, <code>TextDetections</code>. Each\n <code>TextDetection</code> element provides information about a single word or line of text\n that was detected in the image. </p>\n <p>A word is one or more ISO basic latin script characters that are not separated by spaces.\n <code>DetectText</code> can detect up to 100 words in an image.</p>\n <p>A line is a string of equally spaced words. A line isn't necessarily a complete\n sentence. For example, a driver's license number is detected as a line. A line ends when there\n is no aligned text after it. Also, a line ends when there is a large gap between words,\n relative to the length of the words. This means, depending on the gap between words, Amazon Rekognition\n may detect multiple lines in text aligned in the same direction. Periods don't represent the\n end of a line. If a sentence spans multiple lines, the <code>DetectText</code> operation\n returns multiple lines.</p>\n <p>To determine whether a <code>TextDetection</code> element is a line of text or a word,\n use the <code>TextDetection</code> object <code>Type</code> field. </p>\n <p>To be detected, text must be within +/- 90 degrees orientation of the horizontal axis.</p>\n \n <p>For more information, see DetectText in the Amazon Rekognition Developer Guide.</p>"
"smithy.api#documentation": "<p>Detects text in the input image and converts it into machine-readable text.</p>\n <p>Pass the input image as base64-encoded image bytes or as a reference to an image in an\n Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, you must pass it as a\n reference to an image in an Amazon S3 bucket. For the AWS CLI, passing image bytes is not\n supported. The image must be either a .png or .jpeg formatted file. </p>\n <p>The <code>DetectText</code> operation returns text in an array of <a>TextDetection</a> elements, <code>TextDetections</code>. Each\n <code>TextDetection</code> element provides information about a single word or line of text\n that was detected in the image. </p>\n <p>A word is one or more script characters that are not separated by spaces.\n <code>DetectText</code> can detect up to 100 words in an image.</p>\n <p>A line is a string of equally spaced words. A line isn't necessarily a complete\n sentence. For example, a driver's license number is detected as a line. A line ends when there\n is no aligned text after it. Also, a line ends when there is a large gap between words,\n relative to the length of the words. This means, depending on the gap between words, Amazon Rekognition\n may detect multiple lines in text aligned in the same direction. Periods don't represent the\n end of a line. If a sentence spans multiple lines, the <code>DetectText</code> operation\n returns multiple lines.</p>\n <p>To determine whether a <code>TextDetection</code> element is a line of text or a word,\n use the <code>TextDetection</code> object <code>Type</code> field. </p>\n <p>To be detected, text must be within +/- 90 degrees orientation of the horizontal axis.</p>\n \n <p>For more information, see DetectText in the Amazon Rekognition Developer Guide.</p>"
}
},
"com.amazonaws.rekognition#DetectTextFilters": {
@@ -4989,7 +4989,7 @@
}
},
"traits": {
"smithy.api#documentation": "<p>The known gender identity for the celebrity that matches the provided ID.</p>"
"smithy.api#documentation": "<p>The known gender identity for the celebrity that matches the provided ID. The known\n gender identity can be Male, Female, Nonbinary, or Unlisted.</p>"
}
},
"com.amazonaws.rekognition#KnownGenderType": {
@@ -5004,6 +5004,14 @@
{
"value": "Female",
"name": "Female"
},
{
"value": "Nonbinary",
"name": "Nonbinary"
},
{
"value": "Unlisted",
"name": "Unlisted"
}
]
}
@@ -8,6 +8,7 @@
"Amp": "aps",
"Amplify": "amplify",
"AmplifyBackend": "amplifybackend",
"AmplifyUIBuilder": "amplifyuibuilder",
"ApiGatewayManagementApi": "execute-api",
"ApiGatewayV2": "apigateway",
"App Mesh": "appmesh",
37 changes: 0 additions & 37 deletions service/accessanalyzer/api_op_CreateAnalyzer.go

Some generated files are not rendered by default.

37 changes: 0 additions & 37 deletions service/accessanalyzer/api_op_CreateArchiveRule.go

Some generated files are not rendered by default.

37 changes: 0 additions & 37 deletions service/accessanalyzer/api_op_DeleteAnalyzer.go

Some generated files are not rendered by default.