PLMS.

Main Course 2: Sampling Methods in Stable Diffusion (cont.)

Now, let us test another topic. Here is our input:
A portrait of a woman set against a dark background. The subject is positioned in a three-quarter view, facing slightly toward the viewer. The woman is portrayed from the chest up, with her upper body and face prominently displayed. She has a serene expression, characterized by a slight smile that seems to hold a sense of mystery. Her eyes are captivating, with a gaze that follows the viewer from various angles. The woman is portrayed with remarkable realism, with delicate brushwork capturing subtle details in her face and skin tone. Her brown hair is gently layered and frames her face. She is adorned in clothing typical of the era, wearing a dark-colored garment with a veil covering her hair. The painting’s composition is relatively simple, focusing primarily on the subject and her engaging presence.
The origin of the above input is the textual description of the Mona Lisa. Let’s see what Stable Diffusion can give us:

When it comes to generating a human subject rather than a scene with AI, we become pickier. I guess we are more familiar with human facial features, so we can spot the differences more easily. And I think I see Mr. Bean in some of the above outputs… This time, my best 3 picks are DPM++ 2M, DPM2 Karras and DPM2 a Karras.

From the two inputs, we can see that certain samplers are aligned with a similar style: Euler, LMS, DPM++ 2M, DPM2 Karras and others are in one group; DPM2 a, DPM++ 2S a and DPM2 a Karras are in another group; and DPM fast is in a group of its own. Even samplers in the same group may show a wide range of variance, just look at LMS and PLMS.
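If you prefer to script this kind of comparison instead of clicking through a web UI, here is a minimal sketch using the Hugging Face diffusers library. The model ID, the fixed seed, the step count, and the mapping from web-UI sampler names to diffusers scheduler classes are my assumptions, not part of the article’s original setup:

```python
# Minimal sampler-comparison sketch (assumed setup, not the article's exact workflow).
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerDiscreteScheduler,
    LMSDiscreteScheduler,
    DPMSolverMultistepScheduler,
    KDPM2AncestralDiscreteScheduler,
)

# Assumed base model; swap in whichever checkpoint you actually use.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "A portrait of a woman set against a dark background, ..."  # shortened version of the input above

# One representative sampler per style group; the name mapping is approximate.
samplers = {
    "euler": (EulerDiscreteScheduler, {}),
    "lms": (LMSDiscreteScheduler, {}),
    "dpmpp_2m_karras": (DPMSolverMultistepScheduler, {"use_karras_sigmas": True}),
    "dpm2_a": (KDPM2AncestralDiscreteScheduler, {}),
}

for name, (scheduler_cls, extra) in samplers.items():
    # Swap only the scheduler; everything else stays identical.
    pipe.scheduler = scheduler_cls.from_config(pipe.scheduler.config, **extra)
    generator = torch.Generator("cuda").manual_seed(42)  # same seed for every sampler
    image = pipe(prompt, num_inference_steps=30, generator=generator).images[0]
    image.save(f"mona_lisa_{name}.png")
```

Keeping the seed, prompt and step count fixed while swapping only the scheduler is what makes the side-by-side comparison fair.

Dessert: Summary of Our Findings

Let us summarize what we have learned so far: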
- types of prompts
- usage of models
- differences between samplers

We may use more specific keywords when constructing prompts. The choice of model depends on the purpose of your generative AI graphics. Rule of thumb: go to a model download site, pick your genre, then pick the highest-rated model there. On the sampler side, separate the samplers into groups according to their styles and generate a test image from each group. Once you find your favorite style, fine-tune with each sampler within that group to get your desired result.
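Once a group is chosen, that fine-tuning step can be scripted the same way. The sketch below reuses the `pipe` object from the previous example and assumes DPM++ 2M Karras is the preferred sampler; the step counts and CFG values are illustrative only, not tested recommendations:

```python
# Fine-tuning within one sampler group (assumes `pipe` from the previous sketch).
import torch
from diffusers import DPMSolverMultistepScheduler

pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

prompt = "A portrait of a woman set against a dark background, ..."  # same prompt as before

# Sweep step counts and guidance scales to dial in the look within the group.
for steps in (20, 30, 40):
    for cfg in (5.0, 7.5, 10.0):
        generator = torch.Generator("cuda").manual_seed(42)  # fixed seed isolates the effect of steps/CFG
        image = pipe(
            prompt,
            num_inference_steps=steps,
            guidance_scale=cfg,
            generator=generator,
        ).images[0]
        image.save(f"dpmpp_2m_karras_{steps}steps_cfg{cfg}.png")
```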
We can take a look at the following chart to find a faster sampler within each group.