Road-testing 4 different AI tools, so you don't have to! Part seven of a nine-part series of blogs
In this blog we'll compare OpenAI's ChatGPT 4, Google's Gemini, Anthropic's Claude 3.5 and Microsoft's Copilot to see which AI tool gives the best results for different types of queries.
This test asks each of our AI tools to create the following slightly surreal image:
Create an image in the style of a Constable painting of a kangaroo bungee-jumping above a river. The kangaroo should look excited, but scared too.
A particularly hard thing to picture, I thought - but the AI tools disagreed, as we shall see. Here's what each produced:
Tool | Image
---|---
ChatGPT (25 seconds) | *(image not shown in this extract)*
Gemini | *(image not shown in this extract)*
Claude | *(image not shown in this extract)*
Copilot | *(image not shown in this extract)*
I didn't realise when I created this set of tests that neither Gemini nor Claude can - yet - produce images.
Copilot probably just shades this by giving me a choice of images, but since both ChatGPT and Copilot use the DALL-E image generation tool, their answers should be (and are) similar.
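If you'd like to repeat this test programmatically rather than through a chat interface, here's a minimal sketch using OpenAI's Python library to send the same prompt to DALL-E. The model name, image size and API-key setup are assumptions - check what your own account supports.

```python
# Minimal sketch: assumes the openai Python package (v1.x) is installed
# and an OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Create an image in the style of a Constable painting of a kangaroo "
    "bungee-jumping above a river. The kangaroo should look excited, "
    "but scared too."
)

# Ask DALL-E (the image model behind ChatGPT and Copilot) for one image.
response = client.images.generate(
    model="dall-e-3",   # assumed model name; adjust for your account
    prompt=prompt,
    n=1,
    size="1024x1024",
)

# The API returns a URL pointing at the generated image.
print(response.data[0].url)
```

Running this a few times gives you the same kind of "choice of images" that Copilot offers, which makes comparing variations on the prompt much quicker.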