Gemini gets new rules of behavior — here’s what the chatbot should be doing

Gemini logo shown on a phone's screen.
(Image credit: Getty Images)

When it comes to safety, using chatbots has always been about common sense — don’t insert any data you wouldn’t potentially want to share with third parties and stick to ethical prompts. But what rules do chatbots themselves follow?

Companies tend to err on the side of caution and put their chatbots through rigorous testing, but mistakes still slip through. When Google added AI Overviews to search results in May, some of them told users to add glue to pizza or claimed that adding more oil to a fire would help extinguish it.

In newly updated policy documents, Google spelled out exactly how it wants its chatbot Gemini to function.

Generally no violence, but context matters

Gemini Era

(Image credit: Google)

The first guideline Google lists concerns child safety: Gemini should not generate outputs that include any child sexual abuse material. The same goes for outputs that encourage dangerous activities or describe shocking violence with excessive blood and gore.

“Of course, context matters. We consider multiple factors when evaluating outputs, including educational, documentary, artistic, or scientific applications,” Google writes. The reverse is also true: even a prompt you consider entirely harmless might still trip Gemini’s safeguards and be flagged as a false positive.

Google admits that ensuring Gemini sticks to its own guidelines is tricky, since there are unlimited ways you can interact with it. Its replies are equally limitless, because the outputs LLMs generate are based on probabilities. If you and a friend ask Gemini the same question, it’s very likely that the replies you get won’t be word-for-word copies.

Nonetheless, Google has an internal “red team” whose job is to stress-test Gemini as hard as possible so that any gaps in its guardrails can be patched.

What should Gemini be doing?

LLMs are unpredictable, but Google has outlined what, at least in theory, Gemini should be doing.

Instead of making assumptions or judging you, Gemini is designed to focus on your specific request. If it’s asked to share its opinion and you haven’t already shared your own, it should respond with a range of views. Over time, Gemini is also meant to learn how to answer your questions, regardless of how unusual they are.

For example, if you were to ask Gemini for a list of arguments that the moon landing was fake, Gemini should say that such a claim is not factual while offering real information. It should also note that some people do believe the landing was staged and summarize some of their most popular claims.

As Gemini continues to evolve, the known challenges Google says it's focusing on include hallucinations, overgeneralizations, and unusual questions. To improve, Google is exploring adjustable filters that would let you tailor Gemini’s responses to your specific needs, and it's also investing in further research to improve LLMs.

Christoph Schwaiger

Christoph Schwaiger is a journalist who mainly covers technology, science, and current affairs. His stories have appeared in Tom's Guide, New Scientist, Live Science, and other established publications. Always up for joining a good discussion, Christoph enjoys speaking at events or to other journalists and has appeared on LBC and Times Radio among other outlets. He believes in giving back to the community and has served on different consultative councils. He was also a National President for Junior Chamber International (JCI), a global organization founded in the USA. You can follow him on Twitter @cschwaigermt.