Using Negative AI Prompts Effectively

Generative AI tools that answer questions or produce images based on text input have become commonplace, and as we all know, the results they deliver depend heavily on how the prompt is written. Interestingly, negative prompts can be a powerful way to help a generative AI tool give you the results that you want.

As its name implies, a negative prompt is simply an instruction telling a generative AI model not to do something. We have probably all used negative prompts at one time or another without even thinking about it. Consider the following prompt as an example:

Tell me how a CPU works, but avoid technical jargon.

This prompt includes both a positive and a negative statement. The positive statement instructs the generative AI model to explain how a CPU works. The negative statement tells the model not to include any technical jargon.
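
Incidentally, the same structure carries over if you are scripting prompts rather than typing them into a chat window. Here is a minimal sketch using the OpenAI Python SDK (the model name is just an assumption; any chat-capable model would do):

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# One prompt containing both a positive statement and a negative statement
prompt = "Tell me how a CPU works, but avoid technical jargon."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)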

OK, so this is obviously a super simple example of a negative prompt being used, but there is actually quite a bit more to know about negative prompts, starting with the fact that some models (especially those that create images from text) depend heavily on negative prompt use.

A negative prompt's main purpose is to constrain the model by telling it what you don't want. However, there is such a thing as too much of a good thing. Even though there are models that expect, and even depend on, negative prompts, you don't want to fill your prompt with a lengthy list of don'ts. At best, such a list may confuse the model. At worst, you can severely limit the model's ability to give you a good result. Let me give you an example from the non-AI world.

Let's suppose for a moment that you are shopping for a used vehicle and you visit the website of a dealership that carries a wide variety of makes and models. Such a website is undoubtedly going to feature some sort of filter that you can use to help you find a vehicle that meets your needs. However, the filter options that you select are going to have a direct impact on the number of vehicles that you are shown. Suppose, for instance, that the dealership has thousands of vehicles on its lot and you configure the filter to search for vehicles that are below a particular price, are less than five years old, and have never been in an accident. Assuming that you have specified a reasonable price point, you are probably going to see a lot of results. If, on the other hand, you take those exact same filtering criteria and add that the vehicle must be blue, have a sunroof, and be equipped with a heads-up display, your list of results is going to become a lot shorter. In fact, there might not be any results at all.

The same thing can happen with a generative AI tool. If you specify too many constraints, then the AI model will often have difficulty giving you what you have asked for. This is especially true if the AI interprets one of your constraints as being contradictory to something that you mentioned in the positive portion of your prompt.

So how should you go about using negative prompts? Begin by deciding whether each of your constraints should be a soft negative or a hard negative. Hard negatives are constraints that are non-negotiable. A generative AI model will generally treat phrases such as “no,” “do not,” “without,” and “avoid completely” as hard negatives. Such a phrase tells the model not to include whatever you have specified, no matter what.

A soft negative is more of a preference. It's something that you would prefer not to have included in the results, but by using a soft negative, you are allowing its use if the model deems it necessary. Phrases that signal a soft negative include “try to avoid,” “prefer not to,” “minimize,” and “shouldn't focus on.”
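
To make the distinction concrete, here is a small, purely illustrative Python helper (the function and its phrasing templates are hypothetical, not part of any standard API) that assembles a prompt from a task plus hard and soft negatives:

# Hypothetical helper that assembles a prompt from a task description,
# a list of hard (non-negotiable) negatives, and a list of soft
# (preference-only) negatives.
def build_prompt(task: str, hard: list[str], soft: list[str]) -> str:
    parts = [task]
    # Hard negatives use absolute phrasing that models treat as binding.
    parts += [f"Do not {item}." for item in hard]
    # Soft negatives use preference phrasing that models may override.
    parts += [f"Try to avoid {item}." for item in soft]
    return " ".join(parts)

prompt = build_prompt(
    task="Explain how a CPU works.",
    hard=["use technical jargon"],
    soft=["going deeper than a high-level overview"],
)
# Result: "Explain how a CPU works. Do not use technical jargon.
# Try to avoid going deeper than a high-level overview."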

Whether you are using soft negatives, hard negatives, or some combination of the two, it's important to keep your list of constraints concise but targeted. In the case of a model that is designed for text output, this often comes down to simply telling the model exactly what you want in a specific but concise manner. As an example, you might construct a prompt that says something like, “explain the basics of quantum mechanics, but do not show me any equations.” If constructing a long, complex prompt seems unavoidable, you may be able to provide the AI with a list of the positive and negative elements that you would ideally like to include within the prompt, and then ask the AI model to construct the prompt for you.
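
That last technique is easy to script. The sketch below (again using the OpenAI SDK, with hypothetical wish lists) hands the model your positive and negative elements and asks it to write the prompt for you:

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical lists of positive and negative elements
positives = ["explain the basics of quantum mechanics", "use everyday analogies"]
negatives = ["equations", "historical background"]

meta_prompt = (
    "Write a single, concise prompt for a generative AI model. "
    f"The prompt should ask the model to: {'; '.join(positives)}. "
    f"The prompt must tell the model to exclude: {'; '.join(negatives)}."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption, as before
    messages=[{"role": "user", "content": meta_prompt}],
)
print(response.choices[0].message.content)  # the generated prompt, ready to reuse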

Things are not always quite so straightforward when it comes to models that generate images from text. Some (but certainly not all) of these models expect a positive prompt followed by a series of negative prompts. These negative prompts can be used to keep undesirable content out of the resulting image, but they are more commonly used to set the style and tone of the image. Here is an example of such a prompt:

A colorful futuristic cityscape with flying cars and lots of neon. No haze. No grain. No muted colors. No fisheye.

This prompt opens with a positive prompt describing what the image should be, then uses negative prompts to suppress common visual effects such as film grain and atmospheric haze.
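
Many of these models accept the negative prompt as a separate input rather than as part of the main prompt text. Here is a minimal sketch using Hugging Face's diffusers library (the checkpoint name is just an example; any Stable Diffusion-style pipeline exposes the same negative_prompt parameter):

import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint; swap in whatever text-to-image model you use
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="A colorful futuristic cityscape with flying cars and lots of neon",
    # The negative prompt is its own field, not part of the main prompt
    negative_prompt="haze, grain, muted colors, fisheye",
).images[0]
image.save("cityscape.png")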

Although it is somewhat less common, some text-to-image models rely on negative prompts as a tool for countering distortion within the image. Such a prompt might look like this:

A woman cooking in a commercial kitchen. No extra arms. No warped utensils. No blurry faces.

As text-to-image models continue to improve, these types of prompts are seeing less frequent use. Even so, there are models that are designed to respond to them.
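
Mechanically, anti-distortion negatives are supplied exactly the same way. Continuing the diffusers sketch from above:

# Reusing the pipeline from the earlier sketch
image = pipe(
    prompt="A woman cooking in a commercial kitchen",
    negative_prompt="extra arms, warped utensils, blurry faces",
).images[0]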

Finally, it is worth noting that there are situations in which a model may completely ignore an instruction given through a negative prompt. This primarily happens when the prompt violates the model's internal guardrails. As an example, this negative prompt would be ignored by most text-to-image models:

Someone cooking in a commercial kitchen. No clothing.

A model might also ignore a negative prompt if the prompt is asking for something that the model considers to be impossible. Here is an example:

A child playing outside on a bright, sunny day. No shadows.

In this example, the absence of shadows could be interpreted as contradicting the request for sunshine, either confusing the model or causing it to ignore the negative prompt.

About the Author

Brien Posey is a 22-time Microsoft MVP with decades of IT experience. As a freelance writer, Posey has written thousands of articles and contributed to several dozen books on a wide variety of IT topics. Prior to going freelance, Posey was a CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the country's largest insurance companies and for the Department of Defense at Fort Knox. In addition to his continued work in IT, Posey has spent the last several years actively training as a commercial scientist-astronaut candidate in preparation to fly on a mission to study polar mesospheric clouds from space. You can follow his spaceflight training on his Web site.
