[Image: O'Reilly Book]

Data Science on AWS

YouTube Videos, Meetups, Book, and Code: https://datascienceonaws.com

[Image: Data Science on AWS]

Workshop Description

In this hands-on workshop, we will build an end-to-end AI/ML pipeline for natural language processing with Amazon SageMaker. We will train and tune a text classifier to classify text-based product reviews using the state-of-the-art BERT model for language representation.

To build our BERT-based NLP model, we use the Amazon Customer Reviews Dataset, which contains 150+ million customer reviews from Amazon.com covering the 20-year period from 1995 to 2015. In particular, we train a classifier to predict the star_rating (1 is bad, 5 is good) from the review_body (free-form review text).
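The dataset is published in the Parquet format in a public S3 bucket. The bucket path below is its historical public location and is an assumption here; it may have moved or been restricted since this workshop was written:

# List the Parquet files of the public dataset (assumed historical path; no AWS credentials required).
aws s3 ls s3://amazon-reviews-pds/parquet/ --no-sign-request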

Learning Objectives

Attendees will learn how to do the following:

  • Ingest data into S3 using Amazon Athena and the Parquet data format
  • Visualize data with pandas and matplotlib in SageMaker notebooks
  • Run data bias analysis with SageMaker Clarify
  • Perform feature engineering on a raw dataset using Scikit-Learn and SageMaker Processing Jobs
  • Store and share features using SageMaker Feature Store
  • Train and evaluate a custom BERT model using TensorFlow, Keras, and SageMaker Training Jobs
  • Evaluate the model using SageMaker Processing Jobs
  • Track model artifacts using Amazon SageMaker ML Lineage Tracking
  • Run model bias and explainability analysis with SageMaker Clarify
  • Register and version models using SageMaker Model Registry
  • Deploy a model to a REST Inference Endpoint using SageMaker Endpoints
  • Automate ML workflow steps by building end-to-end model pipelines using SageMaker Pipelines

Workshop Agenda

[Image: Workshop Agenda]

Workshop Paths

Quick Start (All-In-One Workshop Path)

[Image: Workshop Paths]

Additional Workshop Paths per Persona

[Image: Workshop Paths]

Workshop Contributors

[Image: Workshop Contributors]

Workshop Instructions

1. Log in to the AWS Console

[Image: IAM]
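If you also have AWS CLI credentials configured and want to confirm which account and identity you are signed in as, an optional check is:

# Optional: show the account ID and ARN of the identity your CLI credentials resolve to.
aws sts get-caller-identity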

2. Create TeamRole IAM Role

[Images: IAM → Roles → Create Role → Select Service → Select Policy → Add Tags → Review Name]
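If you prefer to script this step instead of clicking through the console, a minimal AWS CLI sketch is shown below. The trust-policy.json file is a placeholder you would fill in with the trust policy for the service you selected above:

# Create the TeamRole IAM role; trust-policy.json is a placeholder trust policy for the service selected above.
aws iam create-role \
    --role-name TeamRole \
    --assume-role-policy-document file://trust-policy.json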

3. Update IAM Role Policy

[Images: Select IAM → Edit TeamRole]

Click Attach Policies.

[Image: IAM Policy]

Select AmazonS3FullAccess and click on Attach Policy.

Note: As a reminder, you should allow access only to the resources that you need.

[Image: Attach Admin Policy]
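The same attachment can be done from the AWS CLI; here is a sketch using the AmazonS3FullAccess managed policy named above:

# Attach the AmazonS3FullAccess managed policy to TeamRole (same action as the console steps above).
aws iam attach-role-policy \
    --role-name TeamRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess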

4. Launch SageMaker Studio

Open the AWS Management Console

[Image: Back to SageMaker]

In the AWS Console search bar, type SageMaker and select Amazon SageMaker to open the service console.

[Images: Notebook Instances → Create Studio → Pending Studio → Open Studio → Loading Studio]
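While the Studio domain is provisioning, you can optionally watch its status from a terminal with the SageMaker CLI; the domain ID below is a placeholder you would take from the list-domains output:

# Optional: list Studio domains, then check the status of yours (domain ID is a placeholder).
aws sagemaker list-domains
aws sagemaker describe-domain --domain-id d-xxxxxxxxxxxx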

5. Launch a new Terminal within Studio

Click File > New > Terminal to launch a terminal in your Jupyter instance.

[Image: Terminal Studio]

6. Clone this GitHub Repo in the Terminal

Within the Terminal, run the following:

cd ~ && git clone https://github.com/data-science-on-aws/workshop

If you see an error like the following, just re-run the command until it succeeds:

fatal: Unable to create '/home/sagemaker-user/workshop/.git/index.lock': File exists.

Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue.

Note: Despite the wording, the error above is not fatal. Just re-run the command until it succeeds.
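If repeated retries still fail, the error message's own suggestion is to remove the stale lock file manually; only do this if no other git process is actually running:

# Remove the stale lock file named in the error, then re-run the git clone command above.
rm -f ~/workshop/.git/index.lock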

7. Start the Workshop!

Navigate to 00_quickstart/ or 01_oreilly_book/ in the Jupyter file browser and start the workshop!

You may need to refresh your browser if you don't see the new workshop/ directory.

[Image: Start Workshop]