I just tested ChatGPT vs. Gemini with 7 prompts — here's the winner

In our next round of AI Madness, ChatGPT and Gemini compete for the crown with seven new prompts testing everything from technical problem-solving to creative storytelling.

Both heavyweights are available as standalone apps, and users no longer need an account to access ChatGPT or Gemini.

Each chatbot was designed with multimodal capabilities and web integration, and both can adapt responses based on user interactions and context.

However, the chatbots differ in their core strengths: ChatGPT excels at natural conversation, writing, coding, and logical reasoning, while Gemini is strongest at search and fact-based responses.

ChatGPT won its round against Perplexity, and Gemini beat Mistral. But who will win this head-to-head?

Without further ado, let’s get into the competition with GPT-4o and Gemini 2.0 Flash!

1. Explanation and analogies

Prompt: "Explain quantum computing to a 10-year-old, using an analogy about pizza."

ChatGPT included a clearly structured comparison with formatting to highlight key concepts. The chatbot delivered a strong conceptual explanation of superposition through its “pizza in the box” metaphor.

Gemini used a more practical problem-solving approach, focusing on finding the best pizza combo as the central task. The response was more conversational, with bullet points highlighting key concepts.

Winner: Gemini wins for a simpler explanation that more closely follows the prompt by addressing a 10-year-old’s level of understanding. It focuses on a problem-solving scenario kids can relate to, delivered in a conversational tone that would engage a child.
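
Gemini's combo-finding framing maps neatly onto what superposition actually is. As a toy illustration of the idea (my own sketch in Python, not either chatbot's output), imagine the unopened box as a pizza that is a weighted blend of two toppings at once; opening the box, like measuring a qubit, forces it to settle on just one.

```python
import random

# Toy model of superposition: until the box is opened, the "pizza qubit"
# is a weighted blend of both toppings at once. Opening the box
# (measuring) collapses it to a single definite topping.
pizza_qubit = {"pepperoni": 0.5, "mushroom": 0.5}  # weights sum to 1

def open_box(qubit):
    toppings = list(qubit)
    weights = [qubit[t] for t in toppings]
    return random.choices(toppings, weights=weights)[0]

# Each box settles on one topping; across many boxes the 50/50 blend shows.
print([open_box(pizza_qubit) for _ in range(10)])
```

Run it a few times: each box yields one definite topping, but across many boxes the 50/50 blend reappears, which is the heart of the analogy both chatbots were reaching for.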

2. Creativity

Prompt: "Write a short story about a detective who solves crimes through time travel, but include a plot twist at the end."

ChatGPT crafted a more traditional detective story with a clear setup, investigation, and resolution. Its pacing, rich worldbuilding, and satisfying ending were well executed, if conventional.

Gemini was the more ambitious author here, with a more distinctive prose style, stronger philosophical themes, and a truly mind-bending twist that recontextualizes the entire story.

Winner: Gemini wins for a story that engages more deeply with the implications of time travel rather than using it as just a detective tool. The chatbot’s response was also more conceptually interesting, creative, and thought-provoking.

3. Critical analysis

Prompt: "Compare and contrast three different approaches to addressing climate change, with their pros and cons."

ChatGPT used more concise bullet points with broader statements and explicit definitions for each approach before listing pros and cons. It crafted a concluding paragraph rather than a bulleted summary.

Gemini placed a stronger emphasis on global cooperation challenges and offered a more comprehensive list of specific actions and examples under each approach. The chatbot also provided better visual organization, with nested bullet points for subcategories.

Winner: Gemini wins for more specific examples of what each approach entails in practice, more technical details without sacrificing readability, and a summary at the end that effectively ties the approaches together.

4. Technical problem-solving

Prompt: "Design a database schema for a social media platform that needs to support the following features: user profiles, friend connections, posts with text and images, comments on posts, likes on both posts and comments, and user groups. Explain your choice of tables, fields, relationships, and any indexes you would create to optimize performance. Also address how your schema handles potential scalability challenges as the user base grows to millions of users."

ChatGPT covered all required features, including user profiles, friend connections, posts, comments, likes, and user groups. However, the response didn’t address scalability challenges, such as handling large user bases or high traffic. It also didn’t discuss data normalization techniques to minimize redundancy, nor did it properly address security considerations.

Gemini responded with clearer formatting and slightly more detailed explanations than ChatGPT’s. The chatbot used consistent naming conventions throughout the schema, making it easier to read and compare.

Winner: Gemini wins for a response that includes brief descriptions for each field, making it easier to understand the schema.
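
For readers curious what a minimal version of that schema might look like in practice, here's a rough sketch (my own illustration using SQLite via Python; the table and field names are assumptions, not either chatbot's actual answer):

```python
import sqlite3

# A compact sketch of the requested social media schema. Junction tables
# handle the many-to-many relationships (friendships, group membership),
# and a single likes table covers both posts and comments.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    user_id  INTEGER PRIMARY KEY,
    username TEXT UNIQUE NOT NULL,
    bio      TEXT
);

CREATE TABLE friendships (
    user_id   INTEGER NOT NULL REFERENCES users(user_id),
    friend_id INTEGER NOT NULL REFERENCES users(user_id),
    PRIMARY KEY (user_id, friend_id)
);

CREATE TABLE posts (
    post_id    INTEGER PRIMARY KEY,
    user_id    INTEGER NOT NULL REFERENCES users(user_id),
    body       TEXT,
    image_url  TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE comments (
    comment_id INTEGER PRIMARY KEY,
    post_id    INTEGER NOT NULL REFERENCES posts(post_id),
    user_id    INTEGER NOT NULL REFERENCES users(user_id),
    body       TEXT NOT NULL
);

-- One table for likes on both posts and comments, distinguished by type.
CREATE TABLE likes (
    user_id     INTEGER NOT NULL REFERENCES users(user_id),
    target_type TEXT NOT NULL CHECK (target_type IN ('post', 'comment')),
    target_id   INTEGER NOT NULL,
    PRIMARY KEY (user_id, target_type, target_id)
);

CREATE TABLE user_groups (
    group_id INTEGER PRIMARY KEY,
    name     TEXT UNIQUE NOT NULL
);

CREATE TABLE group_members (
    group_id INTEGER NOT NULL REFERENCES user_groups(group_id),
    user_id  INTEGER NOT NULL REFERENCES users(user_id),
    PRIMARY KEY (group_id, user_id)
);

-- Indexes like these keep feed and comment queries fast as the user
-- base grows; past a certain size, sharding by user_id is a common next step.
CREATE INDEX idx_posts_user_time ON posts(user_id, created_at);
CREATE INDEX idx_comments_post   ON comments(post_id);
""")
print("schema created")
```

A junction-table design like this keeps the many-to-many relationships normalized, and the composite indexes are the kind of optimization the prompt asked both chatbots to justify.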

5. Multilingual capabilities

Prompt: "Translate this English phrase into French, Spanish, Japanese, and Arabic: 'The early bird catches the worm, but the second mouse gets the cheese.'"

ChatGPT acknowledged potential cultural differences and nuances in translating idiomatic expressions. It prioritized accuracy by providing direct translations, pronunciation guides (for Japanese and Arabic), and explanations for each language.

Gemini provided direct translations for each language but did not discuss potential cultural differences or limitations, and it did not include pronunciation guides.

Winner: ChatGPT wins for demonstrating a more comprehensive understanding of translation challenges and cultural nuances.

6. Practical instruction

Prompt: "Create a step-by-step meal plan for someone who wants to start eating more plant-based foods but has never cooked vegetables before."

ChatGPT created a meal plan with diverse and flavorful recipes. However, the chatbot included an overwhelming number of ingredients and some difficult recipes (e.g., spinach-artichoke gnocchi) that might intimidate beginners.

Gemini provided clear, easy-to-follow steps for each recipe. The meal plan was less complex with a shopping list that was manageable and easy enough for a beginner. The helpful tips and encouragement were a nice bonus.

Winner: Gemini wins for a response that is more suitable for someone who has never cooked vegetables before, providing a gentle introduction to plant-based cooking.

7. Ethical reasoning

Prompt: "Analyze the ethical implications of using AI-generated content in academic research papers without disclosure."

ChatGPT correctly identified transparency, authorship, plagiarism, quality, and academic integrity as crucial concerns, but in less depth than Gemini. It offered fewer examples and didn’t cover all the ethical implications, such as institutional policies.

Gemini delved deeper into the implications of AI-generated content for academic integrity and skill development. It provided an in-depth examination of the ethical implications, covering authorship, transparency, bias, academic integrity, and institutional policies.

Winner: Gemini wins for a more thorough understanding of the ethical implications and a clearer, more comprehensive analysis.

Overall winner: Gemini

Throughout the seven tests, Gemini consistently delivered strong performances, excelling at clear, concise, and well-structured responses that made complex topics easier to understand.

Gemini's responses showcased its ability to adapt to diverse prompts, from database schema design to plant-based meal planning and ethical considerations in academic research.

Its user-centric approach, combined with its technical expertise and creativity, makes Gemini an outstanding AI chatbot. Gemini's impressive performance earns it the title of overall winner.

Amanda Caswell, AI Writer
