
Just when you started coming to terms with ChatGPT's eerie capabilities, OpenAI dropped a new version of its AI language model.

OpenAI says GPT-4 is much more advanced than GPT-3.5, the model that powers ChatGPT. And to prove it, they made GPT-4 sit down for a bunch of exams. OpenAI tested GPT-4 with a variety of standardized tests, from high school to graduate to professional level, spanning mathematics, science, coding, history, literature, and even the one you take to become a sommelier. The exams comprised multiple-choice and free-response questions, and GPT-4 was scored using the standard methodology for each exam.

SEE ALSO: How to get access to GPT-4 right now

Put your pencil down, GPT-4, it's time to check your scores.

What, like law school is hard?

GPT-4 didn't just get into law school, it passed the bar. The AI language model scored in the 88th percentile on the LSAT (Law School Admission Test) and did even better on the bar (Uniform Bar Exam), scoring in the 90th percentile. By comparison, GPT-3.5 landed in the bottom 40 percent on the LSAT and the bottom 10 percent on the bar.


College admissions tests were a piece of cake

GPT-4 took both the math and reading/writing sections of the SAT and all three sections of the GRE, which is broken down into quantitative, verbal, and writing skills. It scored in the 80th or 90th percentile on every section except the GRE writing section... which it kind of bombed, landing in the 54th percentile.


The quintessential overachiever, GPT-4 also took all the AP (Advanced Placement) high school exams. It aced most of them, scoring between the 84th and 100th percentiles, except for a few outliers.

GPT-4 scored in the 44th percentile on AP English Language and a measly 22nd on AP English Literature. So all you wordsmiths out there might have some more time before GPT-4 replaces you. GPT-4 also didn't do so hot on AP Calculus BC, scoring between the 43rd and 59th percentiles, proving that even for a supercomputer, calculus is not easy. But that still earns GPT-4 a four, so it might still place out of college calculus.

GPT-4 has some coding work to do

GPT-4 still has some work to do on its coding skills, which is curious since one of its marketed uses is helping developers. Its rating on Codeforces, which hosts competitive programming contests, is 392, putting it way down in the Newbie category (ratings of 1199 and below).


It did pretty well on LeetCode's easy problems (31 out of 41 solved) but struggled at medium and hard difficulty (21/80 and 3/45, respectively). As we saw in the developer demo livestream, GPT-4 is fully capable of writing Python, but it required some manual tweaking to set the right parameters, which might explain some of these test scores. Or maybe it didn't eat breakfast that morning.
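For context, here's a minimal sketch of how a developer might prompt GPT-4 for a LeetCode-style coding task through OpenAI's official Python client. The problem statement, prompt wording, and settings are illustrative assumptions, not the harness OpenAI used for its benchmark:

# Minimal sketch, assuming the official `openai` Python package (v1+) and an
# OPENAI_API_KEY set in the environment. The problem and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

problem = (
    "Given a list of integers nums and a target, return the indices of the two "
    "numbers that add up to the target."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a coding assistant. Reply with Python code only."},
        {"role": "user", "content": problem},
    ],
    temperature=0,  # keep the output as repeatable as possible across runs
)

print(response.choices[0].message.content)  # the generated Python solution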

Ok, but can GPT-4 become a sommelier?

GPT-4 passed the sommelier exams with flying colors. It placed lowest (77th percentile) in the most advanced sommelier exam. But for a non-human entity that's never tasted wine, we'll let that one slide.

OpenAI has released a full breakdown of how GPT-4 performed. GPT-4 might not write the next great American novel... yet, but its future as a mathematically brilliant lawyer and wine connoisseur looks pretty bright.

Topics: Artificial Intelligence, ChatGPT
