What is GPT-4 better at?
If you want to skip reading this article, the answer is pretty much everything. Everything GPT-3.5 can do, GPT-4 can do better. I’ve been messing around with it in various ways for the last few weeks, and the only thing GPT-4 sucks at is speed. It depends on when you use it (ChatGPT writes faster at night when fewer people are using it), but overall, I’d say GPT-4 is only about 25% as fast as GPT-3.5.
Speed issues can be frustrating when you’re not getting the output you want, and you need to regenerate a response after changing your prompt. But that’s the only downside.
In this article, we’ll break down the various ways that GPT-4 is better than GPT-3.5.
Hallucinations
One of the biggest problems with GPT-3.5 is that it makes stuff up. If you asked for 20 facts about breakfast sandwiches, it would give you 20 bullet points or a numbered list with 20 pieces of information. Some of those would be made up. It’s like 3.5 really doesn’t want to disappoint anyone. With GPT-4, you still occasionally get wrong answers, but it’s more likely to stop writing and say something like, “I’m sorry, I can’t finish this because there isn’t enough relevant information.”
Math and Problem Solving
GPT-4 is much better at math compared to its predecessors, thanks to improvements in its architecture and training data. This enables the model to handle more complex mathematical problems and calculations with greater accuracy and speed. With 3.5, sometimes the model would struggle to give you the answer to very basic math questions. With GPT-4, this almost never happens.
GPT-4 is also much better at problem solving. When my friend’s son was born, he had me guess the name of the kid by playing Wordle. At one point I knew there was an A, an I, and a U in the name. I knew the A was in the second position, but I didn’t know where anything else went.
So, being the ChatGPT fiend that I am, I tried to get GPT-3.5 to figure out the answer. It went shockingly badly. I kept asking for 6-letter boy names, but it couldn’t even handle that. It kept giving me a list of random names, then 5-letter names, then names where it said the A was in the second position but it wasn’t. A gorilla using sign language would have done a better job.
GPT-4 however, nailed it on the first try.
The answer was #7. This was a while ago and I don’t remember my first two guesses, but I’m pretty sure that GPT-4 could have nailed it down to KAIRUS with additional information.
Although, if you look at the list, you'll see that seven of the entries don’t have all the letters I asked for, but at least all of them have an A in the second position. So, there’s still some room for improvement here. But it’s still vastly superior to 3.5, which couldn’t even give me a six-letter name at the time, or would insist that names like Aidan fit my criteria. (Bad robot!)
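For the curious, the constraint check that kept tripping up GPT-3.5 is easy to express in code. Here’s a minimal Python sketch of it; the candidate list is invented for illustration (only KAIRUS and Aidan come from the story above):

```python
def fits(name):
    """Check the Wordle-style clues: 6 letters, contains A, I, and U,
    with the A in the second position."""
    n = name.upper()
    return (
        len(n) == 6
        and all(letter in n for letter in "AIU")
        and n[1] == "A"
    )

# Made-up candidate list for illustration
candidates = ["KAIRUS", "AIDAN", "JULIAN", "DARIUS", "MARCUS"]
matches = [name for name in candidates if fits(name)]
print(matches)  # → ['KAIRUS', 'DARIUS']
```

Three one-line checks, and yet 3.5 kept handing back names like Aidan that fail every one of them.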
Writing
GPT-4 just writes better. Its writing is more original, cleaner, and more engaging than that of its predecessors. The advancements in GPT-4's architecture and the vast amount of training data it has been exposed to have enabled it to generate prose that is not only contextually relevant but also stylistically diverse. This results in a smoother and more natural flow of ideas, making the content more enjoyable and informative for readers.
Coding
GPT-4 also codes better. When I tried to get GPT-3.5 to program a simple web game (Name Five) it couldn’t do it on the first try. I had to ask it to generate each function one at a time. It could do the HTML and CSS no problem, but the JavaScript required a lot of manual editing.
Then along came GPT-4 so I asked it to code the entire thing in one go. And it did it. There were no errors, the code was smoother, and the program looked and acted better.
Visual Inputs
When GPT-4 was released, the developers hosted a live event on YouTube where they showcased the various capabilities of GPT-4. One of these was visual inputs. GPT-4 can look at an image and turn it into usable information. So you could upload a meme and ask it why it’s funny. Or you could upload a photo of a restaurant menu and ask it what the healthiest option is for someone who is allergic to seafood.
While visual input mode still hasn’t been released to the public, it looks like a fantastic feature. In the developer video, they uploaded a napkin sketch of a website, and then asked GPT-4 to code the JavaScript, CSS, and HTML for that website. Which it did. And it worked great.
GPT-3.5 not only can’t read images, but it’s not that great at coding either.
So, if you stumbled upon this article wondering whether GPT-4 is worth subscribing to ChatGPT Plus for $20 a month, then the answer is yes.
The only current downside to GPT-4, other than the speed, is that you can’t use it all the time. It’s currently limited to 25 messages every three hours. That doesn’t sound like a lot, but I use ChatGPT for almost everything and I’ve never hit the cap. Although you will probably hit it if you’re generating a ton of text for something like a screenplay or a novel.
I see people on the net complaining about this cap and it’s like, are you serious? For $20/month you can talk to what is basically a demi-god. And your biggest gripe is that you can only ask it 25 questions every three hours? Get real. GPT-4 is amazing. It is epic. It is world-changing, and it is significantly better than GPT-3.5. They could drop the cap to 10 questions every three hours and it would still be worth it.
Thanks for reading and don’t forget to follow us on Twitter.