NLP models, like the GPT-3 or the Gopher, — generate sometimes offensive content. It limits usage of them in real life. The Red Teaming (RT) reduces harmful outputs of the NLP model without expensive human annotation. — The NLP models are meant to interact with real people. The GPT-3 and the Gopher are State-of-the-Art NLP models. Yet, both of them produce sometimes harmful content. In real life — such biased models are risky. A bad actor can use these NLP systems to generate toxic speech.