They have not yet released this to the public.
If you go on the ChatGPT app, as of today, it is still GPT-3.5.
RT:
A new artificial intelligence program can keep up with human beings on a number of professional and academic tests, according to its creator, which says the model scored in the top 10% of test-takers on a simulated bar exam.
Technology firm OpenAI unveiled its latest language model on Tuesday, dubbed GPT-4, saying the program is “more reliable, creative, and able to handle much more nuanced instructions” than its predecessor, GPT-3.5.
“GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks,” it said. “For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%.”
The AI model also performed at the 93rd percentile on an SAT reading exam and at the 89th percentile on an SAT math test, the company added.
The GPT software has been embedded in a number of other apps, such as the language-learning program Duolingo, which is aiming to create conversational bots, as well as automated tutors for the online education company Khan Academy.
The model’s previous iteration, GPT-3.5, gained popularity in the form of the ChatGPT chatbot program, which is capable of holding complex, human-like conversations with users.
According to OpenAI, GPT-4 is its “most advanced system yet,” and unlike GPT-3.5 is able to process image prompts in addition to text. However, despite its improved capabilities, the company warned the new model is “not fully reliable” and still suffers from some glitches, including what it calls “hallucinations,” in which the AI simply fabricates information or generates erroneous answers.
We discussed ChatGPT and the future of AI on the latest episode of Wang Town. Both Paul Town and I are very excited about the future of this technology, as we agree that it is good by nature but, like anything, can be used for evil.
Wang Town 3 has dropped, right on time.
This episode:
-What Scott Adams said
-Everything is a computer simulation
-Events are literally fake
-ChatGPT
This is the best episode yet, if I do say so myself (and I do). https://t.co/Jj68eBmus6
— Andrew Anglin (@WorldWarWang) March 12, 2023
A lot of conservatives don’t understand how this AI works, and on the show, I mentioned that Tim Pool appears to be misrepresenting it on purpose.
There is a “threat” from AI, but it only has to do with people not getting equal access to it. Elon has been pretty critical of AI in general, talking about abuse, but personally I think we should let the abuse happen. It’s going to happen regardless.
The best thing Elon could do would be to give us an open source alternative that we can train and use to our own ends.
I’ve been saying for years that AI science fiction focusing on the idea that AI becomes “conscious” is absurd and a distraction. It can’t become conscious; that is some kind of bizarre fantasy. All it is is information in a computer. But it is all the information, organized in the most accessible way, meaning it can be a very useful tool for basically anything anyone wants to do, good or evil.
If we had an open source version, it would be better than any closed source version, as everyone on earth would be incentivized to work on it and make it better. It would actually be good for Tesla and Twitter if Elon released an open source version, because those are real companies that would benefit from the developments that open source would spur. It’s a unique situation, where there is literally no practical benefit to keeping the software closed source, other than censorship.
Elon is apparently aware of this on some level. He named it “OpenAI” because he wanted it to be open source. However, he no longer has any stake in the company.