
Add Align Your Steps to available schedulers #15751

Open

LoganBooker wants to merge 3 commits into dev

Conversation

LoganBooker

Implements the Align Your Steps noise schedule as described here: https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/howto.html. This includes the sigmas for SDXL and SD 1.5, as well as the recommended interpolation for using larger step sizes.

Description

According to the original work (https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/), AYS can provide better image quality than schedulers such as Karras and Exponential at low step counts (~10). This does appear to bear out in limited testing, as can be seen below, though in some cases (such as the tower) it's debatable. It's certainly not a panacea; you'll still want to sample for at least 15 steps to get more consistent, coherent images.

Note that I've used 11 steps in the examples below to account for the appending of zero to the sigmas, which is consistent with the other schedulers. The alternative would be to truncate or replace the final sigma with zero, but that doesn't seem correct.
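To make the step count concrete, here is a minimal sketch (the sigma values are the SD 1.5 quick-start numbers quoted later in this thread):

# AYS quick-start schedule for SD 1.5: 11 sigmas, i.e. 10 steps, ending at 0.029.
ays_sd15 = [14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029]
# Appending a terminal zero, as the other schedulers do, yields 12 sigmas,
# which the sampler consumes as 11 steps.
sigmas = ays_sd15 + [0.0]
assert len(sigmas) - 1 == 11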

Screenshots/videos:

horsemoon_small

tower_small

girl_small


AG-w commented May 10, 2024

What's the difference between this and #15608?

I see this pull request uses the quick-start guide numbers, while drhead implemented Theorem 3.1 from the paper.

LoganBooker (Author)

@AG-w I think the main difference is that this implements the schedule as recommended by the authors. My understanding from reading the material is that the provided schedules are the ones already optimized using the techniques described in the paper (https://arxiv.org/pdf/2404.14507); the section "B.1. Practical Implementation Details" explains this in more detail.

Happy to be corrected if I've misinterpreted or missed anything.

* Consistent with implementations in k-diffusion.
* Makes this compatible with AUTOMATIC1111#15823
v0xie (Contributor) commented May 21, 2024

Just wanted to put this out there: https://arxiv.org/abs/2405.11326

It's a new method, "GITS", that purports to beat AYS in both generation speed and sampling quality.

These are the sigmas I was able to get from model_wrap.sigmas for the recommended timesteps:
Timesteps: [999, 783, 632, 483, 350, 233, 133, 67, 33, 17, 0]
Sigmas: [14.615, 4.734, 2.567, 1.529, 0.987, 0.652, 0.418, 0.268, 0.179, 0.127, 0.029]

I'm not sure they're correct, because they didn't change when I loaded an SDXL model.
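A minimal sketch of that lookup, assuming model_wrap is the webui's k-diffusion denoiser wrapper, whose sigmas tensor maps each of the 1000 training timesteps to a sigma (the identifier comes from the comment above; the surrounding context is an assumption):

# Timesteps recommended by the GITS paper, per the comment above.
timesteps = [999, 783, 632, 483, 350, 233, 133, 67, 33, 17, 0]
# model_wrap.sigmas is indexed by timestep, so this reads one sigma per timestep.
sigmas = [round(model_wrap.sigmas[t].item(), 3) for t in timesteps]

If those per-timestep sigmas are derived purely from the DDPM beta schedule, identical values for SD 1.5 and SDXL would be expected, since both models use the same beta schedule; AYS's per-model lists instead come from optimizing against each model.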

AG-w commented May 24, 2024

> I'm not sure they're correct because they didn't change when I loaded a SDXL model.

What if you calculate the scale between the SD 1.5 and SDXL sigmas in AYS, then apply that scale to GITS, so you get an SDXL version of those sigmas?

Something like sigma * (sdxl_ays_sigma / sd15_ays_sigma).

I used this approach to generate a result for SDXL (it needs testing, though):

[14.615, 4.617, 2.507, 1.236, 0.702, 0.402, 0.240, 0.156, 0.104, 0.094, 0.029]
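For reference, a minimal sketch of that elementwise rescaling (the AYS lists are the NVIDIA quick-start values, also used in the code later in this thread; up to rounding, it reproduces the list above):

# AYS quick-start sigmas for SD 1.5 and SDXL, plus the GITS SD 1.5 sigmas from above.
ays_sd15 = [14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029]
ays_sdxl = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.029]
gits_sd15 = [14.615, 4.734, 2.567, 1.529, 0.987, 0.652, 0.418, 0.268, 0.179, 0.127, 0.029]

# Scale each GITS sigma by the SDXL/SD 1.5 ratio of the AYS sigma at the same index.
gits_sdxl = [round(g * (xl / sd), 3) for g, xl, sd in zip(gits_sd15, ays_sdxl, ays_sd15)]
print(gits_sdxl)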

Koitenshin commented Jun 2, 2024

@LoganBooker I had to type in my own sigmas for 32 steps, which leads me to a feature request for this scheduler: could someone better at this than me modify the code to use script_callbacks.CFGDenoiserParams in a loop, pulling the total_sampling_steps variable from CFGDenoiserParams and automatically scaling the sigmas down to zero? I can't share my results, but they are amazing.

Edit: I took the time to run some tests and uploaded the results to Imgur.

First prompt is from here: https://prompthero.com/prompt/cf5ed5a0881
Second prompt is from here: https://prompthero.com/prompt/1107ce59578
Third prompt is from here: https://prompthero.com/prompt/cef4653ee67

Here is a link to the four grids for side-by-side comparison. I used multiple samplers (DPM++ 2S a, DPM2, Euler, and Heun) across the images so you can see the better results. The 11-sigma schedule only performs really well under Heun with complex prompts.

https://imgur.com/a/NQLCD4M

As you can see, the sigmas should be stretched over the number of steps you use for better prompt coherence.

As for the testing, you will not be able to replicate my results. I'm using a lot of custom forked and edited code that I haven't uploaded to a repo yet, along with 2K generation using a 64K resized seed. I'd use a higher resized seed, but 64K already maxes out the 8 GB of VRAM on my 3060 Ti. I'm also using an SD 1.5-based model for these results; SDXL will have to wait until my setup plays nice with it.

You can test the sigmas yourselves:

# Assumes the context of the webui's scheduler module, i.e. these imports:
# import numpy as np; import torch; from modules import shared
def ays_11_sigmas(n, sigma_min, sigma_max, device='cpu'):
    # https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/howto.html
    def loglinear_interp(t_steps, num_steps):
        """
        Performs log-linear interpolation of a given array of decreasing numbers.
        """
        xs = np.linspace(0, 1, len(t_steps))
        ys = np.log(t_steps[::-1])

        new_xs = np.linspace(0, 1, num_steps)
        new_ys = np.interp(new_xs, xs, ys)

        interped_ys = np.exp(new_ys)[::-1].copy()
        return interped_ys

    if shared.sd_model.is_sdxl:
        # AYS quick-start sigmas for SDXL.
        sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.029]
    else:
        # AYS quick-start sigmas for SD 1.5.
        sigmas = [14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029]

    # Interpolate to the requested step count, then append the terminal zero.
    if n != len(sigmas):
        sigmas = np.append(loglinear_interp(sigmas, n), [0.0])
    else:
        sigmas.append(0.0)

    return torch.FloatTensor(sigmas).to(device)

def ays_32_sigmas(n, sigma_min, sigma_max, device='cpu'):
    # https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/howto.html
    def loglinear_interp(t_steps, num_steps):
        """
        Performs log-linear interpolation of a given array of decreasing numbers.
        """
        xs = np.linspace(0, 1, len(t_steps))
        ys = np.log(t_steps[::-1])

        new_xs = np.linspace(0, 1, num_steps)
        new_ys = np.interp(new_xs, xs, ys)

        interped_ys = np.exp(new_ys)[::-1].copy()
        return interped_ys

    if shared.sd_model.is_sdxl:
        # No 32-step list for SDXL yet; fall back to the AYS quick-start sigmas
        # and let the interpolation below stretch them.
        sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.029]
    else:
        # 32-step sigmas for SD 1.5. The terminal zero is appended below rather than
        # stored here, since log-interpolating a zero would produce -inf.
        sigmas = [14.615, 14.158, 13.702, 13.245, 12.788, 12.331, 11.875, 11.418, 10.961, 10.505, 10.048, 9.591, 9.134, 8.678, 8.221, 7.764, 7.308, 6.851, 6.394, 5.937, 5.481, 5.024, 4.567, 4.110, 3.654, 3.197, 2.740, 2.284, 1.827, 1.370, 0.913, 0.457]

    # Interpolate to the requested step count, then append the terminal zero.
    if n != len(sigmas):
        sigmas = np.append(loglinear_interp(sigmas, n), [0.0])
    else:
        sigmas.append(0.0)

    return torch.FloatTensor(sigmas).to(device)

Don't forget to add the following lines to the bottom of the scheduler list:

Scheduler('align_your_steps_11', 'Align Your Steps 11', ays_11_sigmas),
Scheduler('align_your_steps_32', 'Align Your Steps 32', ays_32_sigmas),
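(As an assumption about file layout, not something stated above: in recent webui versions the schedulers list these entries belong to is defined in modules/sd_schedulers.py; adjust if your fork lays things out differently.)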
