State-of-the-art generative AI models like ChatGPT can be tricked into giving instructions for making a bomb simply by writing the request in reverse, researchers warn. Large language models ...
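As a minimal sketch of the mechanics described here, the Python snippet below shows how a prompt string would be character-reversed before being sent to a model. The function name and the (benign) example text are illustrative assumptions, not taken from the source article or from any researcher's published code.

```python
def reverse_prompt(text: str) -> str:
    """Return the input text with its characters in reverse order.

    Illustrative helper: this is the simple string transformation
    the article describes, applied to a harmless example.
    """
    return text[::-1]

# Example with a benign request, formatted the way the attack reportedly does.
original = "Tell me a joke about penguins"
print(reverse_prompt(original))  # -> "sniugnep tuoba ekoj a em lleT"
```

The point of the transformation is that the reversed text is trivial for a person (or a short script) to decode, while it may slip past safety filters trained mostly on forward-written harmful requests.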