Releases: PaddlePaddle/PaddleNLP
v2.8.0
We are pleased to announce v2.8.0 of the PaddlePaddle large model toolkit. In this release we deeply optimized the toolkit's fine-tuning and alignment capabilities for large models and improved its training and inference support on domestic (Chinese) computing hardware. The main work is as follows:
- Specialized fine-tuning and efficient alignment: ships our in-house, fast-converging RsLoRA+ algorithm, substantially improving PEFT training convergence speed and quality; integrates high-performance generation acceleration into the RLHF PPO algorithm, removing the generation bottleneck in PPO training and delivering a large lead in PPO training performance.
- Faster large-model training: generalized support for multiple training performance optimizations such as FastFFN and FusedQKV, making large-model training faster and more stable.
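The FusedQKV optimization mentioned above replaces the three separate Q/K/V projections with a single matmul over a concatenated weight, cutting kernel launches. A minimal NumPy sketch of the idea (shapes and names are illustrative, not PaddleNLP's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq, hidden = 2, 8, 16

x = rng.standard_normal((batch, seq, hidden))
w_q = rng.standard_normal((hidden, hidden))
w_k = rng.standard_normal((hidden, hidden))
w_v = rng.standard_normal((hidden, hidden))

# Unfused: three separate GEMMs.
q, k, v = x @ w_q, x @ w_k, x @ w_v

# Fused: concatenate the weights once, run a single larger GEMM,
# then split the result -- fewer kernel launches, better utilization.
w_qkv = np.concatenate([w_q, w_k, w_v], axis=1)   # (hidden, 3*hidden)
q2, k2, v2 = np.split(x @ w_qkv, 3, axis=-1)

assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```

The fused and unfused paths are numerically identical; the benefit is purely in execution efficiency.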
Large Model Fine-tuning, Alignment, Training & Inference Optimizations
- Fine-tuning
- Inference
- Added static-graph inference for QWenVL #7808
Model Additions
- Added static-graph inference for QWenVL #7808
- Added the Deberta and Debertav2 models #8227
- deepset/deberta-v3-large-squad2
- microsoft/deberta-v2-xlarge
- microsoft/deberta-v3-base
- microsoft/deberta-v3-large
- microsoft/deberta-base
- Added mixtral-of-experts #7803
- mistralai/Mixtral-8x7B-Instruct-v0.1
- mistralai/Mixtral-8x7B-v0.1
- Added Llama3 #8315
- meta-llama/Meta-llama-3-8b
- meta-llama/Meta-Llama-3-8B-Instruct
- meta-llama/Meta-llama-3-70b
- meta-llama/Meta-Llama-3-70B-Instruct
Core Framework Upgrades
- Trainer upgrades
- AutoParallel upgrades
- Others
Other Support
- Added the matryoshka representation learning retrieval strategy, saving compute and storage resources. #8165
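Matryoshka representation learning trains embeddings whose leading dimensions remain useful on their own, so an index can store and search truncated vectors. A minimal NumPy sketch of the retrieval side, assuming already-trained matryoshka embeddings (all names and sizes illustrative):

```python
import numpy as np

def truncate_and_normalize(emb: np.ndarray, dim: int) -> np.ndarray:
    """Keep only the first `dim` dimensions and re-normalize to unit length."""
    cut = emb[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

rng = np.random.default_rng(0)
docs = rng.standard_normal((100, 768))    # full-size document embeddings
query = rng.standard_normal((1, 768))

# Search with only the first 64 dims: ~12x less storage and compute.
small_docs = truncate_and_normalize(docs, 64)
small_query = truncate_and_normalize(query, 64)
scores = small_query @ small_docs.T       # cosine similarity, shape (1, 100)
top5 = np.argsort(-scores[0])[:5]
```

In practice the truncated results can also be re-ranked with the full-size embeddings for a small accuracy boost.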
Bug Fixes
- Adjusted log levels and added a timelog timer, compatible across devices. #8261
- Fixed inconsistent randomly initialized shared weights in pipeline parallelism, covering GPT/OPT and other models. #7772
- Disabled downloading from the Hugging Face Hub in CI and unit tests #7798 #8198
- Fixed duplicated concatenation of query and history in the LLM Gradio app when the chat template is enabled. #7992
- Fixed a key error when downloading GPT models. #8253
- Fixed LlamaRotaryEmbedding #7882
- Fixed an allreduce dtype issue #7876
- Fixed an issue caused by the framework's dev branch removing the paddle.jit.dy2static.utils_helper API #7989
- Fixed the read-data timer when ignore_data_skip=False and skip_profile_timer=False. #8177
- Fixed Wandb unit tests #8066 #8056
- Fixed an error when Trainer parses JSON and command-line list arguments at the same time #7860
- Fixed inference issues in the Gradio UI #7740 #7788
- Fixed basic Tokenizer issues #7797 #7870
- Fixed loading the RNG state on custom devices. #7894
- Fixed garbled BF16 loss printing under auto-parallelism #7874
- Initialize models in float to fix static-graph auto-parallel AMP errors #8033 #8199
- Fixed incorrect use of the ShardDataloader interface under pipeline parallelism #8014
- Fixed llama precision issues on custom devices. #7895
- Fixed an NPU AICPU operator issue #7976
- Fixed missing arguments in FusedLinearWithGradAdd. #8178
What's Changed
- [Unified Checkpoint] Add unified checkpoint training args doc. by @DesmonDay in #7756
- [AutoParallel] Auto Trans PP to VPP by @zhaoyinglia in #7747
- Add codecov check by @zjjlivein in #7760
- [CE] Delete gpt_for_sequence_classification by @ZHUI in #7757
- [DOC] Update trainer.md by @ZHUI in #7761
- [Release] Change version to 2.7.0 by @ZHUI in #7764
- [benchmark]close skip_memory_metrics for ips by @Liujie0926 in #7732
- [Release] Update release.yml to release tags by @ZHUI in #7765
- [AutoParallel] Add Sequence Parallel for Static LLaMA by @JZ-LIANG in #7746
- [New Features] support dynamic src_length by @wj-Mcat in #7740
- Fix unified_checkpoint bug by @DrownFish19 in #7770
- [DONE] aistudio, hf hub, bos update download by @JunnYu in #7608
- [Trainer] Fix dist dataloader eval by @DesmonDay in #7777
- [Paddle-pipelines] Update convert_files_to_dicts_splitter by @w5688414 in #7748
- [PEFT]fix lora model tp when existing other trainable module by @lugimzzz in #7781
- [Paddle-Pipelines] update faiss by @qingzhong1 in #7793
- Fix shared weights sync for PipelineLayer by @DrownFish19 in #7772
- [tests] download slow by @JunnYu in #7798
- [INFER][LLM] Support qwen in fined grained dybatch v1 by @DanGuge in #7644
- Add CE for Distributed Hybrid Parallel by @iosmers in #7782
- add MP2-SP2-pp4-vpp2-SD2-stage1-mbs2-acc8 ce by @tianhaodongbd in #7774
- [Pretrain] Fix eval during pretrain by @DesmonDay in #7806
- pipeline parallel benchmark by @zhangting2020 in #7759
- [Bug fixes] fix br gradio by @wj-Mcat in #7788
- delete useless code for write_cache_kv.cu by @yuanlehome in #7812
- [llm]support qlora pp by @lugimzzz in #7801
- Trainer support simultaneously parse JSON files and cmd arguments. by @greycooker in #7768
- [LLM] Support block_attention/cachekv quant for llama by @RichardWooSJTU in #7649
- [Bug Fix] fix paddle multipy_fwd_func warning message by @BeingGod in #7818
- [llm]fix lora by @lugimzzz in #7824
- fused rms spmd by @liuzhenhai93 in #7830
- [Pretrain] Fix eval during pretrain by @DesmonDay in #7827
- [neural search][fix bug of evaluate.py] by @ZeyuTeng96 in #7832
- [neural search] fix the bug of reading files when calculating the recall scores by @shenghwa in #7836
- [Bug fixes] update chatglm tokenizer by @wj-Mcat in #7797
- [semantic_indexing] fix bug of evaluate.py by @ZeyuTeng96 in #7843
- [faq] fix bug of evaluate.py by @ZeyuTeng96 in #7840
- [text_classification_retrieval_based] fix bug of evaluate.py by @ZeyuTeng96 in #7844
- [LLM] add Qwen-7B-Chat to PaddleNLP unit test by @ziangqin-baidu in #7823
- Support 5.2 bloom by @zhoutianzi666 in #7846
- [unified checkpoint] Fix last checkpoint save by @DrownFish19 in #7854
- [unified checkpoint] fix checkpoint names by @DrownFish19 in #7795
- [New Features]add ranks testing for test_predictor by @wj-Mcat in #7800
- [Auto Parallel] Support dynamic semi-auto training in Llama2 model by @haohongxiang in #7851
- [CI] add ci approval pipelines by @zjjlivein in #7859
- [fix] fix a bug of trainer/argparser.py by @greycooker in #7860
- [Improvement] fix ops improting in utils by @wj-Mcat in #7865
- [Add CE] Add CE for Hybrid Parallism by @iosmers in #7817
- [Unified Checkpoint] Cherry pick empty cache. by @ZHUI in #7868
- Add PPO training. by @guoshengCS in #7305
- Update reward_main.py by @wawltor in #7880
- Update ppo_main.py by @wawltor in #7881
- [LLM] revert benchmark codes by @RichardWooSJTU in #7871
- [LLM]support QWenVL second part by @DanGuge in #7808
- [Bug Fixes] update chatglm1 tokenizer by @wj-Mcat in #7870
- 【AutoParallel】Support 'master_grad' in Llama in static auto-parallelism by @heavyrain-lzy in #7658
- [Bug Fix] fix slice bug in LlamaRotaryEmbedding by @MarioLulab in #7882
- 【AutoParallel】Support bf16 loss in static by @heavyrain-lzy in #7874
- [Bug Fix] fix allreduce tensor dtype by @BeingGod in #7876
- [CE] Add Qwen into CE process by @ziangqin-baidu in #7887
- [Hackathon 5th No.73] ToT by @ErnestinaQiu in #7660
- [CustomDevice] fix loading rng state on custom devices by @SylarTiaNII in #7894
- [LLM] ...
v2.7.2
This release contains a number of minor bug fixes.
What's Changed
- [Unified Checkpoint] fix checkpoint names by @DrownFish19 in #7794
- [Unified Checkpoint] Fix last checkpoint save by @DrownFish19 in #7810
- [PEFT] Cherry pick lora fix by @lugimzzz in #7826
- [Unified Checkpoint] Fix unified checkpoint by empty cache. by @ZHUI in #7855
- [Fix Download] update converted logic & fix hf hub download subfolder bug by @JunnYu in #7911
- [Cherry-pick] logger level by @KB-Ding in #7920
- [Cherry-pick] RuntimeTimer for the toolkit (#7913) by @KB-Ding in #7921
- [Release] 2.7.2 for paddlenlp bugfix. by @ZHUI in #7892
Full Changelog: v2.7.1...v2.7.2
v2.7.1
This release contains a number of minor bug fixes.
What's Changed
- Fixed several issues encountered when resuming training @ZHUI in #7771
- Fixed GPT initialization under pipeline parallelism @DrownFish19 in #7775
- Fixed a dist dataloader evaluation issue. @DesmonDay in #7778
Full Changelog: v2.7.0...v2.7.1
PaddleNLP 2.7.0 Release Note
We are pleased to announce v2.7.0 of the PaddlePaddle large model toolkit. This release deeply optimizes the toolkit's large-model capabilities, with major improvements in usability, performance, and stability.
Overall, this release highlights:
- A unified toolchain entry point for large models: the implementations for pretraining, fine-tuning, compression, inference, and deployment are consolidated under the PaddleNLP/llm directory.
- Brand-new large-model toolchain documentation: a one-stop guide from getting started with large models to deploying them in production. See: https://paddlenlp.readthedocs.io/zh/latest/llm/finetune.html
- Unified Checkpoint: model weights, optimizer states, etc. are stored in a unified safetensors format that no longer depends on the distributed strategy, and dynamic scaling when resuming training is supported, greatly improving checkpoint portability for large models.
- Upgraded parameter-efficient fine-tuning: efficient fine-tuning features can now be combined with LoRA, and algorithms such as QLoRA are supported.
End-to-End Large Model Training & Inference
- Pretraining
- Unified the pretraining entry point to llm/run_pretrain.py.
- Added pretraining support for qwen and other models, with flash attention support.
- Fine-tuning
- Supports using LoRA together with Linear quantization
- Supports combining pipeline-parallel models with LoRA
- Added the NEFTune method
- Added QLoRA support
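NEFTune, listed above, perturbs the input token embeddings with scaled uniform noise during fine-tuning only. A minimal NumPy sketch of the noise rule (illustrative; in the toolkit this happens inside the embedding layer, and the function name here is hypothetical):

```python
import numpy as np

def neftune_noise(embeddings: np.ndarray, alpha: float = 5.0,
                  rng=None) -> np.ndarray:
    """NEFTune: add uniform noise in [-1, 1] to token embeddings,
    scaled by alpha / sqrt(seq_len * hidden_dim). Training-time only."""
    rng = rng or np.random.default_rng(0)
    seq_len, dim = embeddings.shape[-2], embeddings.shape[-1]
    scale = alpha / np.sqrt(seq_len * dim)
    return embeddings + scale * rng.uniform(-1.0, 1.0, size=embeddings.shape)

emb = np.zeros((4, 16, 32))          # (batch, seq_len, hidden)
noisy = neftune_noise(emb, alpha=5.0)
```

At evaluation time the noise is simply omitted; the scale shrinks with sequence length and hidden size so long inputs are not over-perturbed.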
- Compression
- Supports PTQ and QAT quantization, including A8W8, WINT8, WINT4, and A8W4
- Supports quantization algorithms such as SmoothQuant, GPTQ, and AWQ
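The A8W8/WINT8 schemes above rest on symmetric INT8 quantization. A minimal per-tensor sketch of that building block (not the PaddleSlim implementation, which is per-channel and calibration-driven):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w ~= scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
# Rounding error is at most half a quantization step.
err = np.abs(dequantize(q, scale) - w).max()
```

SmoothQuant, GPTQ, and AWQ refine this basic recipe by migrating activation outliers into weights or by error-compensated rounding.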
Unified Checkpoint
- For large models we usually run multi-GPU distributed training, and the checkpointed model weights are saved as shards, e.g. split according to tensor parallelism and pipeline parallelism. Saving checkpoints directly according to the distributed strategy is straightforward, but it has the following problems:
- It is unfriendly to downstream inference: when users want to run inference from an intermediate checkpoint, they must merge the sharded model weights manually.
- It handles resumed training poorly when the distributed strategy or the number of training nodes changes; users often have to post-process the checkpoint by hand, adding operational complexity.
- To solve these problems as far as possible and reduce user effort, we upgraded the large-model checkpointing framework and propose a unified storage scheme: Unified Checkpoint. Its core idea is to store model weights, optimizer states, etc. in a unified safetensors format, without distinguishing distributed strategies at save time, improving checkpoint portability.
- Unified Checkpoint provides the following features:
- Weight storage is independent of the distributed strategy and uses the unified safetensors format;
- Flexibly supports scaling training up or down, adapting to switches between different distributed training strategies.
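Conceptually, Unified Checkpoint saves the merged, strategy-independent weights rather than per-rank shards. The merge step for a column-parallel linear layer can be sketched as follows (an illustrative NumPy toy, not the actual PaddleNLP code, which additionally writes safetensors files):

```python
import numpy as np

# Two tensor-parallel ranks each hold one column shard of the same layer.
shard0 = {"linear.weight": np.ones((8, 4))}
shard1 = {"linear.weight": np.zeros((8, 4))}

def merge_column_parallel(shards, key):
    """Concatenate column-parallel shards back into the full weight,
    so the saved checkpoint no longer depends on the TP degree."""
    return np.concatenate([s[key] for s in shards], axis=1)

full = {"linear.weight": merge_column_parallel([shard0, shard1],
                                               "linear.weight")}
# full["linear.weight"].shape == (8, 8)
```

On resume, the full tensor is re-sliced according to whatever parallel degree the new job uses, which is what makes dynamic scaling possible.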
Model Additions
- Retrieval model moka-ai/m3e-base
- Retrieval model BAAI/bge-small-zh-v1.5
Core Framework Upgrades
- Trainer upgrades
- Supports --skip_memory_metrics 0 to display real-time GPU and host memory usage
- Supports --unified_checkpoint and --unified_checkpoint_config for model saving under hybrid parallelism and restarting with dynamic scaling.
- Added the PretrainModelPipe base class to support pipeline-parallel training.
Other Support
- Display the PaddleNLP commit id via paddlenlp.version.commit
- Support downloading from and saving to the AI Studio hub
Bug Fixes
- Fixed several dist_dataloader issues
- Fixed several model dynamic-to-static conversion issues
- Fixed several GPT training bugs, removed GPT2, and fixed some seed-setting issues
- Fixed several issues with the baichuan model under pipeline parallelism.
New Contributors
- @Wennie396 made their first contribution in #6897
- @Wong4j made their first contribution in #7008
- @yuanlehome made their first contribution in #7080
- @Xreki made their first contribution in #7105
- @Tom-Zheng made their first contribution in #7092
- @TimeYWL made their first contribution in #7122
- @From00 made their first contribution in #7168
- @RichardWooSJTU made their first contribution in #7186
- @heavyrain-lzy made their first contribution in #7269
- @LokeZhou made their first contribution in #7337
- @JZ-LIANG made their first contribution in #7301
- @WAI-clear made their first contribution in #7402
- @tianhaodongbd made their first contribution in #7293
- @zzjjay made their first contribution in #7504
- @anexplore made their first contribution in #7558
- @niuliling123 made their first contribution in #7528
- @zxcd made their first contribution in #7577
- @MayYouBeProsperous made their first contribution in #7575
- @iosmers made their first contribution in #7613
- @AndSonder made their first contribution in #7343
- @zhink made their first contribution in #7679
- @kingTLE made their first contribution in #7708
Full Changelog: v2.6.1...v2.7.0
v2.6.1
What's Changed
v2.6.1 contains a large number of bug fixes that improve the stability of LLM models and related components. Beyond bug fixes, the main new features are:
- LLM: added the qwen model; the InTokens data flow is now compatible with Pipeline Parallel; LLM fine-tuning supports loading from multiple training files and warm restarts; enhanced LLaMA with different recompute granularities
- Trainer: added the hybrid_parallel_topo_order option and fixed model saving under sharding stage3.
- Paddle-pipelines: added support for ERNIE-Bot-turbo and ERNIE-embedding, updated the hierarchical search example, and enhanced the ChatPaper UI
- Megatron datasets: added support for loading megatron datasets, covering the ernie-1.0 and T5 data formats
New Contributors
- @xiezheng-XD made their first contribution in #6764
- @carryyu made their first contribution in #6676
- @xiaoxiaohehe001 made their first contribution in #6798
- @MARD1NO made their first contribution in #6865
- @zhoutianzi666 made their first contribution in #6905
- @lchdl made their first contribution in #6964
- @LaiXinyi823 made their first contribution in #6659
Full Changelog: v2.6.0...v2.6.1
v2.6.0
PaddleNLP 2.6: a full upgrade into the era of large models!
We are excited to announce that PaddleNLP 2.6 has been fully upgraded and officially released! This release marks our formal entry into the large-model era. PaddleNLP 2.6 introduces a brand-new end-to-end PaddlePaddle large language model toolchain covering pretraining, fine-tuning, compression, inference, and deployment, giving users a complete end-to-end large-model solution.
The toolchain fully supports mainstream large models such as LLaMA 1/2, BLOOM, ChatGLM 1/2, GLM, and OPT, allowing users to try different large models at low cost with a single set of tools.
To support this toolchain, we made extensive upgrades on the underlying framework side:
- We upgraded the Trainer API into a 4D-parallel distributed Trainer, making model training more efficient.
- We implemented the parameter-efficient fine-tuning algorithms LoRA and Prefix Tuning, enabling fine-tuning of 100B-scale models on a single machine.
- Leveraging PaddleSlim's in-house quantization algorithms, we achieved lossless quantization across all supported large models.
These upgrades are all meant to let users train, optimize, and deploy models more easily in the large-model era. We look forward to your trial and feedback as we advance PaddleNLP together. From 2.5 to 2.6, PaddleNLP gained 40 new contributors; thanks to everyone for supporting PaddleNLP's open-source work!
New Contributors
- @zws-2019 made their first contribution in #5167
- @qiuwenbogdut made their first contribution in #5098
- @kuizhiqing made their first contribution in #5347
- @46319943 made their first contribution in #5419
- @jiaohuix made their first contribution in #5465
- @kangguangli made their first contribution in #5438
- @vivienfanghuagood made their first contribution in #5563
- @zhiboniu made their first contribution in #5470
- @cyber-pioneer made their first contribution in #5598
- @invokerbyxv made their first contribution in #5622
- @megemini made their first contribution in #5658
- @zhenyun-li made their first contribution in #5683
- @solrex made their first contribution in #5736
- @nemonameless made their first contribution in #5487
- @Yulv-git made their first contribution in #5709
- @wangxinxin08 made their first contribution in #5773
- @AlphaHinex made their first contribution in #5815
- @houj04 made their first contribution in #5820
- @Joker1718 made their first contribution in #5816
- @pkuzyc made their first contribution in #5538
- @jadepeng made their first contribution in #5841
- @KB-Ding made their first contribution in #5886
- @parap1uie-s made their first contribution in #5775
- @zirui made their first contribution in #5866
- @GOH-Gu made their first contribution in #5951
- @yangjianfengo1 made their first contribution in #6069
- @zhangting2020 made their first contribution in #5922
- @rogerserper made their first contribution in #6192
- @wtmlon made their first contribution in #6258
- @qingzhong1 made their first contribution in #6251
- @BeingGod made their first contribution in #6307
- @zhiqiu made their first contribution in #6347
- @DesmonDay made their first contribution in #6435
- @cyk1337 made their first contribution in #6447
- @lxp521125 made their first contribution in #6491
- @littsk made their first contribution in #6425
- @RachelXu7 made their first contribution in #6572
- @wanghuancoder made their first contribution in #6539
- @DrownFish19 made their first contribution in #6570
- @GhostScreaming made their first contribution in #6673
Full Changelog: v2.5.2...v2.6.0
PaddleNLP v2.6.0rc
PaddleNLP v2.5.2
New Features
PPDiffusers
- Added a FastDeploy-based CycleDiffusionPipeline and a dynamic-graph CycleDiffusionPipeline, plus a dynamic-graph Gradio UI #4945 #4830
- Updated LoRA to support a custom lora_rank #4894 #4925
- Added ControlNet, with inference and training support #5009 #5090
- Upgraded clip_guided_stable_diffusion, interpolate_stable_diffusion, lpw_stable_diffusion, and stable_diffusion_mega in the community directory #4920 #4947
AutoNLP
- AutoNLP text classification supports inference deployment via Taskflow #4896
- Supports the full train-evaluate-compress-infer pipeline for text classification with both finetune and prompt tuning #4967 #4963
- Supports VisualDL and distributing training logs to each trial #4990 #5021
Core Framework
- Completed the transformers-style model upgrade for MegatronBERT, MobileBert, Reformer, Roformerv2, and skep
- Added 14 Chinese BART models #4636
- Added 3 Chinese text summarization Taskflow models #4933
FastGeneration
Bug Fix
PaddleNLP v2.5.1
New Features
PPDiffusers
- PPDiffusers supports loading models from and uploading them to the HF Hub #4640 #4625
- Added an AutoEncoder training pipeline #4137
- Added LoRA, supporting LoRA training of dreambooth and text_to_image, with the corresponding training scripts updated #4768
AutoNLP
Core Framework
- ERNIE-Layout supports recompute #4490
- Added the AutoConverter feature to Roberta and T5, enabling direct loading of torch models
- Unified all activation functions in PaddleNLP under paddlenlp.transformers.activations #4589
- Nezha and GauAlpha model architectures completed the unified transformers experience upgrade
- Added AutoModel support for the Chineseclip model #4585
- Added a model-zoo test suite #4398
- Added the BLIP 1.0 model, supporting CLIP Interrogator image-to-text #4676
- Removed the overridden from_pretrained_v2 methods in CLIP, ErnieVil, and ChineseCLIP #4797
- Added the polynomial learning-rate schedule and the DataCollatorForLanguageModeling and DataCollatorForWholeWordMask APIs #4826
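The polynomial schedule added above interpolates the learning rate from its initial value down to an end value over a fixed number of steps. A standalone sketch of the usual formula (parameter names illustrative, not the PaddleNLP signature):

```python
def polynomial_decay(step: int, total_steps: int,
                     lr_init: float = 1e-4, lr_end: float = 1e-7,
                     power: float = 1.0) -> float:
    """Polynomial decay from lr_init to lr_end over total_steps.
    power=1.0 is linear decay; larger powers decay faster early on."""
    if step >= total_steps:
        return lr_end
    frac = 1.0 - step / total_steps
    return (lr_init - lr_end) * frac ** power + lr_end

start_lr = polynomial_decay(0, 100)    # lr_init at the start
end_lr = polynomial_decay(100, 100)    # lr_end at the end
```

With power=1.0 this reduces to the familiar linear warmdown; typically it is combined with a separate warmup phase.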
UTC
- Added the utc-xbase, utc-base, utc-medium, utc-mini, utc-micro, utc-nano, and utc-pico versions; the default model switched from utc-large to utc-base #4716 #4825
- Added UTC English documentation #4476
Pipelines
- Added an end-to-end cross-modal retrieval solution, with a full service deployment pipeline for text-to-image search. #4516
Bug Fix
- Fixed prediction offset issues in UIE-X with special characters #4687
- Fixed local model loading failures for the zero_shot_text_classification task in Taskflow #4505
- Fixed unexpected results when gathering cls_positions within a batch in the UTC model #4785
- Fixed the tqdm experience in notebooks when downloading bos models #4603
- Removed the redundant protobuf dependency #4600
- Fixed incorrect automatic attention_mask generation for ernie-m #4494
- Fixed download and installation of pre-release versions #4661
- Fixed randomness in the precision comparison inside AutoConverter #4568
- Fixed download errors for non-community model weights in multi-node or multi-GPU settings #4491
- Fixed the argument type of is_shuffle in information_extraction, unified_sentiment_analysis, and model_zoo/uie #4460
- Fixed incorrect T5 FastGeneration sampling results #4624
PaddleNLP v2.5.0
Highlights
PaddleNLP 2.5 arrives with a full upgrade! In this release we ship PPDiffusers, a PaddlePaddle diffusion model toolbox that lowers the cost of researching and using diffusion models. On the industrial application side we release document information extraction UIE-X, unified text classification UTC, unified sentiment analysis UIE-Senta, and an unsupervised question answering application. To ease on-device deployment, we open-source the latest ERNIE 3.0 Tiny v2 series along with an end-to-end semantic understanding compression solution backed by full quantization and vocabulary quantization. On the framework side we provide PretrainedConfig to unify pretrained model configuration, and upgrade framework APIs including the Trainer API, Prompt API, and data augmentation API. This release also includes joint work with the Huggingface ecosystem; we welcome you to try PaddleNLP pretrained models on Huggingface. From 2.4 to 2.5, PaddleNLP gained 34 new contributors; thanks to everyone for supporting PaddleNLP's open-source work! The PaddleNLP 2.5 release contents are described below.
New Features
PPDiffusers: Diffusion Model Toolbox Released
The wildly popular AI painting diffusion models are here 🔥
PPDiffusers is a diffusion model toolbox based on PaddlePaddle that provides multimodal diffusion models, helping developers quickly use and build text-to-image, text-to-video, and text-to-text diffusion models
A Collection of SOTA Diffusion Pipelines
- With pipelines, a few lines of code let you paint with Stable Diffusion, accelerated by FastDeploy; there are 30+ such pipelines, including the latest Chinese text-to-image models IDEA/Taiyi-Stable-Diffusion, BAAI/AltDiffusion, and MindDiffusion/wukonghuahua.
Rich Noise Schedulers and Model Components
- Provides a rich set of noise schedulers, supporting not only the mainstream DDPM, DDIM, and PNDM but also the latest DPMSolver, with 14+ schedulers to trade off speed against quality. Integrates multiple diffusion model components such as UNet1d, UNet2d, and UNet2d Conditional for conveniently building your own diffusion models.
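The speed/quality trade-off among schedulers comes down to how many denoising steps they take from the same noise schedule. A toy NumPy sketch of the common building blocks, a linear beta schedule and a deterministic DDIM-style update (illustrative only, not the PPDiffusers API):

```python
import numpy as np

# Linear beta schedule as in DDPM; schedulers mostly differ in how they
# step x_t -> x_{t_prev} given the model's noise prediction.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

def ddim_step(x_t, eps_pred, t, t_prev):
    """One deterministic DDIM update; taking fewer, larger steps
    trades sample quality for speed."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    x0 = (x_t - np.sqrt(1 - a_t) * eps_pred) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps_pred

x = np.random.default_rng(0).standard_normal((4, 4))
# A real model would predict the noise; zeros stand in here.
x = ddim_step(x, eps_pred=np.zeros_like(x), t=999, t_prev=949)
```

A 1000-step DDPM sampler walks this chain one step at a time; DDIM or DPMSolver cover it in 20-50 such jumps.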
Comprehensive Training and Inference Tutorials
- Provides training tutorials for multiple scenarios: training from scratch, domain fine-tuning, and few-shot customization are all covered. After training, your own model can also be accelerated by following the FastDeploy inference tutorial.
On-Device Semantic Understanding Compression Solution
Released an on-device semantic understanding compression solution based on ERNIE 3.0 Tiny, helping developers quickly deploy pretrained models on edge devices
ERNIE 3.0 Tiny V2 Lightweight Models Released
- ERNIE 3.0 Tiny V2 builds on the V1 models with downstream knowledge injection, multi-task learning, and other strategies, significantly improving results on out-domain and low-resourced data
Full Quantization Compression Solution Based on PaddleSlim
- First release of a PaddleSlim-based full quantization acceleration solution, with vocabulary quantization to reduce deployment memory; inference speed improves substantially with essentially no loss in accuracy
FastDeploy for All Scenarios
- FastDeploy is an all-scenario, easy-to-use, flexible, and highly efficient AI inference deployment tool that greatly lowers the difficulty of edge deployment
Industrial Application Library Upgrades
Document Information Extraction: UIE-X
- Comprehensive scenarios: covers the mainstream document information extraction tasks and supports multiple languages, meeting developers' diverse extraction needs
- Leading results: trained on top of UIE-X, a model with outstanding multimodal information extraction performance and broad, mature practical applicability
- Easy to use: three lines of code via Taskflow enable quick calls without labeled data, and a single command starts information extraction training, making deployment simple and lowering the barrier to applying information extraction technology
- Efficient tuning: developers can get started with data labeling and model training without any machine learning background
Unified Text Classification: UTC
- SOTA results: UTC is a SOTA model built on a unified semantic matching framework, setting new records on both the FewCLUE and ZeroCLUE leaderboards
- Unified modeling: a single model supports multiple task types, including multi-class, multi-label, and hierarchical classification
- Fast transfer: strong zero-shot and few-shot transfer, with Label Studio labeling guides provided for quick tuning and development
Unified Sentiment Analysis: UIE-Senta
- Comprehensive: new uie-senta models with greatly improved results, supporting sentence-level sentiment classification, aspect extraction, opinion extraction, and other common sentiment analysis capabilities
- Efficient tuning: Label Studio labeling guides let developers quickly train and tune models with simple data labeling
- Scenario-proven: tools polished in real application scenarios, solving real-world problems such as implicit sentiment aspect extraction and sentiment aspect aggregation
Unsupervised Question Answering
- Innovative: an unsupervised retrieval-based QA system (automatic QA-pair generation plus intelligent retrieval-based QA) that combines question generation, UIE answer extraction, and retrieval-based QA to automatically generate QA pairs from unstructured text; the generated QA corpus can then be used to build a retrieval-based QA system without supervision.
- Simple: PaddleNLP Pipelines provides a complete end-to-end intelligent QA system, including QA corpus generation, index building, model serving, and WebUI visualization
Core Framework Upgrades
PretrainedConfig
- Model configuration is now formalized, making model parameters easier to configure; GPT/T5/Ernie/ErnieM/ErnieLayout/Bart/MBart/Unified_Transformer/Unimo/CodeGen and other models upgraded to PretrainedConfig
Trainer API
- Added basic training capabilities: mixed precision O1 and O2 modes and bf16 training #3352
- Added distributed training capabilities: recompute and sharding #3352
- Added Seq2SeqTrainer to support seq2seq model training #3352
- Added Memory Tracer to monitor host and GPU memory #4181
Model Compression API
- The model compression API integrates quantization-aware training, vocabulary compression, and more, and supports combining strategies #3271 #4159 #4011
- The model compression API supports ERNIE, UIE, BERT, TinyBERT, ELECTRA, ERNIE-M, RoBERTa, PP-MiniLM, and more #3234
Data Augmentation API
- Added character- and sentence-level data augmentation strategies, antonym and word-embedding-based synonym tables, and file input/output augmentation #4194
Prompt API
- The Template API adds support for Prefix-Tuning and UniMC
FastGeneration
- Added T5 generation acceleration, with dynamic-to-static conversion and inference library support #3763
- Adjusted the model.generate() interface: the use_faster argument is renamed to use_fast #4213
- Transformer generation acceleration no longer requires the FFN hidden size to be exactly 4x the model width #3592
FastTokenizer
- Updated FastTokenizer to 1.0.1, fixing a wrong keyword argument of get_vocab_size in PretrainedFastTokenizer #4339
- Fixed the FastTokenizer AddToken interface not accepting the AddedToken data structure. #4380
- Fixed FastTokenizer still creating threads during single-threaded tokenization. #4441
SimpleServing
- Added the SimpleServing deployment mode, a FastAPI-based wrapper that lets Transformers models and Taskflow be deployed as services in a few lines of code, lowering the barrier to serving #2845
Huggingface Ecosystem Integration
For the first time, PaddleNLP integrates with the Huggingface ecosystem: all Model and Tokenizer classes can download from and upload to the Huggingface Hub, so developers can try pretrained models directly from Huggingface
- All Model and Tokenizer classes support downloading from and uploading to the Huggingface Hub
- The Text Summarization, Fill Mask, and Dialogue Taskflows support loading directly from the Huggingface Hub and connect to the HuggingFace Inference API
- Added ConversionMixin; from_pretrained for bert and gpt models supports loading torch-weight models directly from the Huggingface Hub
Bugs
- Fixed edge cases in load_torch #4383
- Fixed SKEP-based sentiment analysis tokenizer segmentation issues #4357
- Fixed FastGeneration producing out-of-vocabulary ids under FP16 #3936
- Fixed FastGeneration being unusable with FP16 under the new PaddlePaddle eager mode #3936
- Fixed UnifiedTransformer and UNIMOText usage issues with the native generation API #3936
- Fixed BART, MBART, and T5 generation errors with 4D AttentionMask #3936
- Fixed ecosystem model downloads on Windows #3640 #3670
- Fixed from_pretrained_v2 failing to load fp16 models. #3902
- Fixed model-saving errors under Trainer sharding. #4220
- Fixed CPU training of Pegasus text summarization failing on Windows. #4431
Others
- Added data download and a full data preprocessing pipeline, plus a custom dataset interface and documentation #3269
- Added the prepare_decoder_input_ids_from_labels method to T5 #4331
- Refactored the CLIP and ERNIE VIL models and added the ChineseCLIP model #4270
- Added the CMSIM_LOCK model #4388
- Pipelines supports batch prediction; added ERNIE Vilg text-to-image, RocketQAv2, and ERNIE-Search English semantic retrieval #3432 #3512 #3718 #3906; Pipelines also adds keyword-plus-semantic dual-path retrieval recall, a Docker image build process, and the Milvus 2.1 vector retrieval tool #3864 #3315 #3283
New Contributors
- @JamesLim-sy made their first contribution in #3089
- @bruce0210 made their first contribution in #3209
- @wuhuachaocoding made their first contribution in #3211
- @kztao made their first contribution in #3182
- @paopjian made their first contribution in #3221
- @0x45f made their first contribution in #3277
- @HexToString made their first contribution in #3309
- @Septilliony made their first contribution in #3375
- @Elvisambition made their first contribution in #1799
- @YanhuiDua made their first contribution in #3377
- @Yam0214 made their first contribution in #3370
- @alkaideemo made their first contribution in #3424
- @ShawnNew made their first contribution in #3431
- @qipengh made their first contribution in #3434
- @sijunhe made their first contribution in #3411
- @iamWHTWD made their first contribution in #3527
- @USTCKAY made their first contribution in #3521
- @feifei-111 made their first contribution in #3585
- @Wang-ck123 made their first contribution in #3409
- @chenxiangzhen made their first contribution in #3602
- @ymyjl made their first contribution in #3641
- @sserdoubleh made their first contribution in #3662
- @ChenBinfighting1 made their first contribution in #3677
- @firestonelib made their first contribution in #3755
- @co63oc made their first contribution in #3955
- @zjjlivein made their first contribution in #3969
- @DefTruth made their first contribution in #3999
- @christineaa made their first contribution in #3977
- @shentanyue made their first contribution in #4042
- @LazyFyh made their first contribution in #4102
- @pangyoki made their first contribution in #3954
- @GGBond8488 made their first con...