
Pretrain Vision and Large Language Models in Python: End-to-end techniques for building and deploying foundation models on AWS

Language
English

Format
E-book

Category
Non-fiction

Master the art of training vision and large language models with conceptual fundamentals and industry-expert guidance. Learn about AWS services and design patterns, with relevant coding examples.

Key Features

Learn to develop, train, tune, and apply foundation models with optimized end-to-end pipelines

Explore large-scale distributed training for models and datasets with AWS and SageMaker examples

Evaluate, deploy, and operationalize your custom models with bias detection and pipeline monitoring

Book Description

Foundation models have forever changed machine learning. From BERT to ChatGPT, CLIP to Stable Diffusion, when billions of parameters are combined with large datasets and hundreds to thousands of GPUs, the result is nothing short of record-breaking. The recommendations, advice, and code samples in this book will help you pretrain and fine-tune your own foundation models from scratch on AWS and Amazon SageMaker, while applying them to hundreds of use cases across your organization.

With advice from seasoned AWS and machine learning expert Emily Webber, this book helps you learn everything you need to go from project ideation to dataset preparation, training, evaluation, and deployment for large language, vision, and multimodal models. With step-by-step explanations of essential concepts and practical examples, you’ll go from mastering the concept of pretraining to preparing your dataset and model, configuring your environment, training, fine-tuning, evaluating, deploying, and optimizing your foundation models.

You will learn how to apply scaling laws when distributing your model and dataset over multiple GPUs, remove bias, achieve high throughput, and build deployment pipelines.
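
The scaling laws in question (e.g., Hoffmann et al.'s "Chinchilla" results) suggest a compute-optimal budget of roughly 20 training tokens per model parameter, so a 7B-parameter model would want on the order of 140B tokens. As a hedged illustration of the distributed-training side — a minimal sketch, not taken from the book itself — the snippet below launches a multi-node, data-parallel PyTorch job with the SageMaker Python SDK; the script name train.py, the role ARN, and the S3 paths are placeholders.

    # Minimal sketch: multi-node PyTorch pretraining job on SageMaker.
    # train.py, the role ARN, and the S3 URIs are placeholder values.
    from sagemaker.pytorch import PyTorch

    estimator = PyTorch(
        entry_point="train.py",               # your training/pretraining script
        role="arn:aws:iam::123456789012:role/SageMakerRole",
        instance_count=2,                     # two nodes for data parallelism
        instance_type="ml.p4d.24xlarge",      # 8 x A100 GPUs per node
        framework_version="2.0",
        py_version="py310",
        distribution={"torch_distributed": {"enabled": True}},  # launch via torchrun
    )

    # Each channel surfaces inside the job as an SM_CHANNEL_* environment variable.
    estimator.fit({"train": "s3://my-bucket/pretraining-data/"})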

By the end of this book, you’ll be well equipped to embark on your own project to pretrain and fine-tune the foundation models of the future.

What you will learn

Find the right use cases and datasets for pretraining and fine-tuning

Prepare for large-scale training with custom accelerators and GPUs

Configure environments on AWS and SageMaker to maximize performance

Select hyperparameters based on your model and constraints

Distribute your model and dataset using many types of parallelism

Avoid pitfalls with job restarts, intermittent health checks, and more

Evaluate your model with quantitative and qualitative insights

Deploy your models with runtime improvements and monitoring pipelines
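
To make the evaluation and deployment items above concrete, here is a minimal, hypothetical sketch that continues from the training example earlier on this page: it puts the trained model behind a real-time SageMaker endpoint with request/response capture enabled, which is the hook a downstream monitoring pipeline reads from. The instance type and S3 path are again placeholders.

    # Minimal sketch: real-time endpoint with data capture for monitoring.
    # `estimator` comes from the training sketch earlier on this page.
    from sagemaker.model_monitor import DataCaptureConfig

    capture = DataCaptureConfig(
        enable_capture=True,
        sampling_percentage=100,                       # capture every request
        destination_s3_uri="s3://my-bucket/capture/",  # placeholder bucket
    )

    predictor = estimator.deploy(
        initial_instance_count=1,
        instance_type="ml.g5.2xlarge",  # single-GPU inference instance
        data_capture_config=capture,
    )

    print(predictor.endpoint_name)  # invoke this endpoint from your application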

Who this book is for

If you’re a machine learning researcher or enthusiast who wants to start a foundation modelling project, this book is for you. Applied scientists, data scientists, machine learning engineers, solution architects, product managers, and students will all benefit from it. Intermediate Python skills are a must, along with introductory knowledge of cloud computing. A strong grasp of deep learning fundamentals is needed; more advanced topics are explained as they arise. The content covers advanced machine learning and cloud techniques, explaining them in an actionable, easy-to-understand way.

© 2023 Packt Publishing (E-book): 9781804612545

Release date

E-book: May 31, 2023
