meiyouruguo posted on 2023-10-9 13:33:46

DeepFaceLab Live-Model Training Tutorial (Original English with Chinese Translation)


I want to swap my face to a particular celebrity. What do I need to do?
If you are a novice, first learn all about DeepFaceLab: https://www.aibl.vip/thread-11-1-1.html
Gather 5000+ samples of your face under various conditions using a webcam; these will be used for Live. The conditions are: different lighting, different facial expressions, head direction, eye direction, being farther from or closer to the camera, etc. Sort the faceset by best and keep the top 2000.
Public storage with facesets and models is here: https://www.aibl.vip/thread-87-1-1.html

Using the pretrained "RTT model 224 V2.zip" from public storage (see above). Make a backup before every stage!
[*]Place the RTM WF Faceset V2 from public storage (see above) into workspace/data_dst/aligned.
[*]Place your celeb faceset into workspace/data_src/aligned.
[*]Do not change the settings. Train +500,000 iterations.
[*]Replace the dst faceset in workspace/data_dst/aligned with your own faceset.
[*]Continue training for +500,000 iterations, (optionally) deleting inter_AB.npy every 100,000 iterations (save, delete, continue the run; see the sketch after this list).
[*]Set random_warp: OFF, GAN power 0.1, GAN patch size 28, gan_dims: 32. Train +700,000 iterations.
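The "save, delete, continue run" cycle above is easy to script. Below is a minimal Python sketch, assuming the standard workspace/model layout named in these steps and that the inter_AB weights live in a file whose name ends with inter_AB.npy (the exact prefix depends on your model name); the paths are placeholders, not part of the original recipe. Stop training with a save first, run the script, then restart training.

```python
# Minimal helper sketch for the "save, delete, continue run" cycle above.
# Assumptions (not from the original post): the workspace layout is
# workspace/model, and the inter_AB weights sit in a file whose name
# ends with "inter_AB.npy" (the prefix depends on your model name).
import shutil
from datetime import datetime
from pathlib import Path

WORKSPACE = Path("workspace")          # adjust to your DeepFaceLab workspace
MODEL_DIR = WORKSPACE / "model"
BACKUP_ROOT = WORKSPACE / "model_backups"

def backup_model() -> Path:
    """Copy the whole model folder before touching anything."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = BACKUP_ROOT / f"model_{stamp}"
    shutil.copytree(MODEL_DIR, dest)
    return dest

def delete_inter_ab() -> None:
    """Remove the inter_AB weights so they are re-initialized on the next run."""
    matches = list(MODEL_DIR.glob("*inter_AB.npy"))
    if not matches:
        print("No inter_AB.npy found - nothing to delete.")
    for f in matches:
        print(f"Deleting {f}")
        f.unlink()

if __name__ == "__main__":
    print(f"Backed up model to {backup_model()}")
    delete_inter_ab()
    # Now restart training as usual and continue the run.
```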
Using an SAEHD model trained from scratch: res: 224, WF, archi: liae-udt, ae_dims: 512, e_dims: 64, d_dims: 64, d_mask_dims: 32, eyes_mouth_prio: N, blur_out_mask: Y, uniform_yaw: Y, lr_dropout: Y, batch: 8. Leave the other options at their defaults. Make a backup before every stage!
[*]Place the RTM WF Faceset V2 from public storage (see above) into workspace/data_dst/aligned.
[*]Place your celeb faceset into workspace/data_src/aligned.
[*]Train +1,000,000 iterations, deleting inter_AB.npy every 100,000 iterations (save, delete, continue the run).
[*]Place your own faceset into workspace/data_dst/aligned.
[*]Do not delete anything; continue training for +500,000 iterations.
[*]Set random_warp: OFF, GAN power 0.1, GAN patch size 28, gan_dims: 32. Train +700,000 iterations.
[*]Export the model in .dfm format for use in DeepFaceLive. You can also try ordering a deepfake model from someone on Discord or the forum. (A workspace layout check is sketched after this list.)
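Since every stage above depends on the facesets sitting in the right folders, a quick check can save a wasted run. This is a minimal sketch that assumes only the folder names mentioned in the steps (workspace/data_src/aligned, workspace/data_dst/aligned, workspace/model); everything else is illustrative.

```python
# Quick sanity check of the workspace layout used in the steps above.
# Folder names follow the post; the image-extension list is an assumption.
from pathlib import Path

WORKSPACE = Path("workspace")
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def count_faces(folder: Path) -> int:
    """Count aligned face images in a folder."""
    return sum(1 for p in folder.glob("*") if p.suffix.lower() in IMAGE_EXTS)

def check_workspace() -> None:
    for name in ("data_src/aligned", "data_dst/aligned", "model"):
        folder = WORKSPACE / name
        if not folder.is_dir():
            print(f"MISSING: {folder}")
            continue
        if name.endswith("aligned"):
            print(f"{folder}: {count_faces(folder)} aligned face images")
        else:
            print(f"{folder}: present")

if __name__ == "__main__":
    check_workspace()
```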

I want to train a ready-to-use face model that can swap any face to a celebrity, like the public face models. What do I need to do?
If you are familiar with DeepFaceLab, this tutorial will help you. The src faceset is the celebrity. It must be diverse enough in yaw, light, and shadow conditions. Do not mix different ages. The best result is obtained when the face is filmed over a short period of time and the makeup and facial structure do not change. The src faceset should be XSeg'ed and have the masks applied; you can apply Generic XSeg to the src faceset.
Using the pretrained "RTT model 224 V2.zip" from public storage (see above). Make a backup before every stage!
[*]Place the RTM WF Faceset V2 from public storage (see above) into workspace/data_dst/aligned.
[*]Place your celeb faceset into workspace/data_src/aligned.
[*]Place the model folder into workspace/model.
[*]Do not change the settings; train +500,000 iterations, deleting inter_AB.npy every 100,000 iterations (save, delete, continue the run).
[*]Set random_warp: OFF, GAN power 0.1, GAN patch size 28, gan_dims: 32. Train +700,000 iterations. (These GAN-stage settings are collected in the sketch after this list.)
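The same GAN fine-tuning settings recur in every recipe in this post, so they are collected below for reference. This is a sketch only: the key names mirror how the options appear in the SAEHD training prompt (random_warp, gan_power, gan_patch_size, gan_dims) but should be treated as assumptions; the values come from the steps above and are entered by hand in the interactive prompt when training starts.

```python
# GAN fine-tuning stage settings used repeatedly in this tutorial,
# summarized as a plain dict. Key names are illustrative assumptions;
# the values are the ones given in the steps above.
GAN_STAGE = {
    "random_warp": False,   # random_warp: OFF
    "gan_power": 0.1,       # GAN 0.1 power
    "gan_patch_size": 28,   # patch size 28
    "gan_dims": 32,         # gan_dims: 32
}

MAIN_STAGE = {
    "random_warp": True,    # leave random warp on until the GAN stage
    "gan_power": 0.0,       # GAN disabled during the main stage
}

if __name__ == "__main__":
    for name, stage in (("main stage", MAIN_STAGE), ("GAN stage", GAN_STAGE)):
        print(name, stage)
```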
Using an SAEHD model trained from scratch: res: 224, WF, archi: liae-udt, ae_dims: 512, e_dims: 64, d_dims: 64, d_mask_dims: 32, eyes_mouth_prio: N, blur_out_mask: Y, uniform_yaw: Y, lr_dropout: Y, batch: 8. Leave the other options at their defaults. Make a backup before every stage!
[*]Place the RTM WF Faceset V2 from public storage (see above) into workspace/data_dst/aligned.
[*]Place your celeb faceset into workspace/data_src/aligned.
[*]Train +2,000,000 iterations, deleting inter_AB.npy every 100,000-500,000 iterations (save, delete, continue the run).
[*]With random_warp still ON, train +500,000 iterations.
[*]Set random_warp: OFF, GAN power 0.1, GAN patch size 28, gan_dims: 32. Train +700,000 iterations.
Reusing a trained SAEHD RTM model: models saved from before the random_warp: OFF stage can be reused. In that case, delete inter_AB.npy from the model folder and continue training from the random_warp: ON stage, increasing that stage to 2,000,000 or more iterations. You can delete inter_AB.npy every 500,000 iterations to increase src-likeness. A model saved from before random_warp: OFF can also be reused for a new celeb face (see the sketch below).
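In practice, the reuse described above amounts to copying the trained model files into a fresh workspace and stripping the inter_AB weights before continuing training. A minimal sketch under assumed paths (both workspace names and the *inter_AB.npy file pattern are placeholders, not part of the original post):

```python
# Sketch of reusing a trained SAEHD RTM model for a new celeb, as described
# above: copy the model folder into a fresh workspace and delete inter_AB.
# Both workspace paths and the file-name pattern are placeholder assumptions.
import shutil
from pathlib import Path

OLD_MODEL_DIR = Path("workspace_old/model")   # finished RTM model
NEW_MODEL_DIR = Path("workspace_new/model")   # workspace for the new celeb

def reuse_model() -> None:
    if NEW_MODEL_DIR.exists():
        raise SystemExit(f"{NEW_MODEL_DIR} already exists - refusing to overwrite.")
    shutil.copytree(OLD_MODEL_DIR, NEW_MODEL_DIR)
    for f in NEW_MODEL_DIR.glob("*inter_AB.npy"):
        print(f"Deleting {f}")
        f.unlink()
    print("Done. Point training at the new workspace and continue from the "
          "random_warp: ON stage.")

if __name__ == "__main__":
    reuse_model()
```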
Is there any way to train a model faster?
You can train a model in 1 day on an RTX 3090, sacrificing some quality.
1-day RTX 3090 training, using the pretrained "RTT model 224 V2.zip" from public storage (see above):
[*]Place the RTM WF Faceset V2 from public storage (see above) into workspace/data_dst/aligned.
[*]Place your celeb faceset into workspace/data_src/aligned.
[*]Place the model folder into workspace/model.
[*]Do not change the settings; train +25,000 iterations.
[*]Delete inter_AB.npy (save, delete, continue the run).
[*]Train +30,000 iterations.
[*]Set random_warp: OFF, GAN power 0.1, GAN patch size 28, gan_dims: 32.
[*]Stop training after 24 hours. (A rough time-budget sketch follows this list.)
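To see how the fixed-iteration stages fit into the 24-hour budget, the arithmetic can be sketched as below. The iterations-per-second figure is an assumption, not a number from the post; read the real throughput from the training console and plug it in.

```python
# Rough wall-clock budget for the 1-day RTX 3090 recipe above.
# ITERS_PER_SEC is a placeholder assumption; measure your own throughput.
ITERS_PER_SEC = 2.5  # assumed throughput at res 224, batch 8

stages = [
    ("pretrained settings, no changes", 25_000),
    ("after deleting inter_AB.npy", 30_000),
]

def hours(iters: int, ips: float = ITERS_PER_SEC) -> float:
    """Convert an iteration count into wall-clock hours."""
    return iters / ips / 3600.0

total = 0.0
for name, iters in stages:
    h = hours(iters)
    total += h
    print(f"{name}: {iters} iters ~= {h:.1f} h")

print(f"Fixed-iteration stages: ~{total:.1f} h")
print(f"Remaining for the GAN stage in a 24 h budget: ~{24 - total:.1f} h")
```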

I want to change some code and test the result on my local machine. What do I need to do?
There is a ready-to-use VSCode editor inside the DeepFaceLive folder, located at _internal\vscode.bat. All code changes will only affect the current folder. You can also build a new, clean DeepFaceLive folder with the code from the current folder using _internal\build DeepFaceLive.bat.



dytcps posted on 2023-10-9 19:36:50

My graphics card is too low-end to run this :'(

3028539077 posted on 2023-10-11 13:22:15

Checking in to collect spirit stones

14x6dy posted on 2023-10-12 17:48:11

Learning, learning, barely keeping up

tengxxx posted on 2023-10-12 18:52:38

Checking in to collect spirit stones!!!

liqianjie posted on 2023-10-13 04:00:39

Checking in to collect spirit stones

3028539077 posted on 2023-10-15 21:41:12

Thanks to the OP for sharing

3028539077 posted on 2023-10-16 13:16:47

Thanks to the OP for sharing :$:$:$:$:$:$:$

venushunter posted on 2023-10-19 14:45:15

I want it, but I have too few points

venushunter posted on 2023-10-20 08:37:10

Checking in to collect spirit stones