
GPT-4 Is Coming: A Look Into The Future Of AI

Some have said that GPT-4 will be “next level” and disruptive, but what will the reality be?

CEO Sam Altman answers questions about GPT-4 and the future of artificial intelligence.

Any hints that GPT-4 will be a multimodal AI?

In a podcast interview (AI for the Next Era) from September 13, 2022, OpenAI CEO Sam Altman discussed the near future of AI technology.

Of particular interest was that he said a multimodal model was in the near future.

Multimodal means the ability to operate in multiple modes, such as text, images, and sound.

OpenAI currently interacts with humans through text input. Whether it’s DALL-E or ChatGPT, it’s strictly a text-based interaction.

An AI with multimodal capabilities can interact through speech: it can listen to commands, provide information, or perform tasks.
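To make the distinction concrete, here is a minimal Python sketch of the difference between a text-only interaction and a multimodal one. The MultimodalClient class and its ask() method are hypothetical stand-ins invented for this illustration, not OpenAI’s actual API.

```python
# Hypothetical sketch: one request that mixes modalities, in contrast to a
# text-only interaction. MultimodalClient is an invented stand-in, not a
# real OpenAI API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Reply:
    text: str


class MultimodalClient:
    """Toy interface to a model that accepts text, images, and audio together."""

    def ask(self, text: str, image_path: Optional[str] = None,
            audio_path: Optional[str] = None) -> Reply:
        # A real multimodal model would encode every input into a shared
        # representation; this toy just reports what it received.
        received = [name for name, value in
                    (("text", text), ("image", image_path), ("audio", audio_path))
                    if value]
        return Reply(text=f"Received modalities: {', '.join(received)}")


client = MultimodalClient()
print(client.ask("What is in this photo?", image_path="photo.jpg").text)
# -> Received modalities: text, image
```

The point is the shape of the interface: instead of a single text box, one call can carry several kinds of input at once.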

Altman provided these tantalizing details about what to expect soon:

“I think we’ll have multimodal models in not much longer, and that’ll open up new things.

I think people are doing great work with agents that can use computers to do things for you, use programs, and this idea of a language interface where you say, in natural language, what you want in that kind of back-and-forth dialogue.

You iterate and revise it, and the computer does it for you.

You see some of this with DALL-E and CoPilot in very early ways.”

Altman didn’t specifically say that GPT-4 would be multimodal, but he hinted that it would arrive within a short period of time.

Of particular interest is that he envisions multimodal AI as a platform for building new business models that aren’t possible today.

He compared multimodal AI to the mobile platform and how that opened up opportunities for thousands of new ventures and jobs.

Altman said:

“…I think this is going to be a huge trend; very large businesses will be built with this as the interface, and in general [I think] that these very powerful models are going to be one of the real new technology platforms, which we haven’t really had since mobile.

And there’s always an explosion of new companies right after that, so that’s going to be great.”

When asked about the next stage of development for AI, he responded with what he said were surefire features.

“I think we’re going to get real multimodal models that work.

So it’s not just text and images; every modality you have in one model is able to easily and fluidly move between things.”

Self-improving AI models?

Something that hasn’t been talked about much is that AI researchers want to create an AI that can learn on its own.

This ability goes beyond automatically understanding how to do things like translate between languages.

This spontaneous ability to do things is called emergence. It’s when new capabilities emerge from an increased amount of training data.
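A rough way to picture emergence is a capability that stays near zero until training scale crosses some threshold, then climbs sharply. The short Python sketch below produces that shape; the threshold and curve are made-up numbers for illustration, not measured results.

```python
# Toy illustration of emergence: a capability score that is negligible below
# a scale threshold and rises sharply past it. All numbers are invented.
def toy_capability(compute_flops: float, threshold: float = 1e22,
                   sharpness: float = 3.0) -> float:
    """Accuracy-like score as a function of training compute."""
    return 1.0 / (1.0 + (threshold / compute_flops) ** sharpness)


for flops in (1e20, 1e21, 1e22, 1e23):
    print(f"compute={flops:.0e}  capability={toy_capability(flops):.3f}")
# The score sits near 0.000-0.001, jumps to 0.500 at the threshold, and
# approaches 1.000 one order of magnitude later.
```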

But an AI that learns on its own is something else entirely, one that doesn’t rely on huge amounts of training data.

What Altman described is an artificial intelligence that learns and develops its own capabilities.

Moreover, this kind of artificial intelligence goes beyond the version paradigm that software traditionally follows, where a company releases version 3, version 3.5, and so on.

He envisions an AI model that is trained and then learns on its own, growing itself into an improved version.
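The difference he’s describing can be sketched in a few lines: a frozen model keeps the weights it had at training time, while a continually learning model updates them on every new example it sees. The toy online-gradient-descent loop below illustrates the idea; it is not a description of how OpenAI actually trains models.

```python
# Toy contrast between a frozen model and one that keeps learning. The
# "model" here is a single weight fit by online gradient descent; this is
# an illustration of continual learning, not OpenAI's training method.
def online_update(weight: float, x: float, y: float, lr: float = 0.1) -> float:
    """One SGD step on squared error for the model y_hat = weight * x."""
    y_hat = weight * x
    grad = 2 * (y_hat - y) * x
    return weight - lr * grad


frozen = 1.0    # trained once, never updated ("stuck at the time it was trained")
learning = 1.0  # same starting point, but updated on every new example

for x, y in [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]:  # true relation: y = 2x
    learning = online_update(learning, x, y)

print(f"frozen weight:   {frozen:.3f}")    # still 1.000
print(f"learning weight: {learning:.3f}")  # has moved toward 2.0
```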

Altman did not indicate that GPT-4 would have this capability.

He brought this up as something they were aiming for, seemingly something that fell into the realm of distinct possibility.

He described an AI with the capacity for self-learning:

“I think we’ll have models that are constantly learning.

So right now, if you use GPT whatever, it’s stuck at the time it was trained. And it doesn’t get any better the more you use it, and all that.

I think we’ll change that.

So I’m really excited about all of that.”

It’s unclear if Altman is talking about artificial general intelligence (AGI), but it kind of sounds like it.

Altman recently debunked the idea that OpenAI already has an AGI, which is quoted later in this article.

The interviewer asked Altman to explain how all the ideas he was talking about were actual goals and plausible scenarios and not just opinions about what OpenAI would like to do.

The interviewer asked:

“So I think it would be useful to share one thing – because people don’t realize that you’re actually making these strong predictions from a somewhat critical standpoint, not just ‘we can take this hill’…”

Altman explained that all of these things he’s talking about are predictions based on research that allows the company to identify a viable path forward and confidently choose the next big project.

He shared:

“We like to make predictions where we can be on the frontier, and understand predictably what the scaling laws look like (or we’ve already done the research) where we can say, ‘OK, this new thing is going to work and make predictions that way.’

And that’s how we try to operate OpenAI, which is to do the next thing in front of us when we have high confidence and take 10% of the company to fully explore, which has resulted in huge gains.”
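Scaling laws, for context, are empirical power-law fits that let researchers train small models and extrapolate how a much larger one will perform before committing to it. Here is a minimal sketch assuming the parameter-count power law published by Kaplan et al. (2020); the constants are their approximate published values, not OpenAI’s internal fits.

```python
# Minimal sketch of scaling-law extrapolation, assuming the power law
# L(N) = (N_c / N) ** alpha from Kaplan et al. (2020). The constants are
# approximate published values, not OpenAI's internal numbers.
def predicted_loss(params: float, n_c: float = 8.8e13,
                   alpha: float = 0.076) -> float:
    """Test loss predicted from non-embedding parameter count N."""
    return (n_c / params) ** alpha


# Fit on small models, then predict a model you haven't trained yet:
for n in (1.3e9, 13e9, 175e9):
    print(f"{n:.1e} params -> predicted loss {predicted_loss(n):.2f}")
```

This is the sense in which a new thing can be said to work before it is built: the curve fit on smaller runs predicts the larger one.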

Can OpenAI reach new milestones with GPT-4?

One of the things necessary to drive OpenAI forward is money and vast amounts of computing resources.

Microsoft has already poured $3 billion into OpenAI, and according to The New York Times, is in talks to invest an additional $10 billion.

The New York Times reported that GPT-4 is expected to be released in the first quarter of 2023.

It has been hinted that GPT-4 may have multimodal capabilities, citing venture capitalist Matt McIlwain, who has knowledge of GPT-4.

The Times reported:

OpenAI is working on a more robust system called GPT-4, which could be released as soon as this quarter, according to Mr. McIlwain and four other people familiar with the effort.

…Built using Microsoft’s massive network of computer data centers, the new chatbot could be a system much like ChatGPT that solely generates text. Or it could juggle images as well as text.

Some venture capitalists and Microsoft employees have seen the service in action.

But OpenAI has not yet determined whether the new system will be released with capabilities that include images.

Money follows OpenAI

While OpenAI has not shared details with the public, it has shared details with the venture funding community.

Talks are currently under way that would value the company at up to $29 billion.

This is quite an achievement because OpenAI is not currently generating much revenue, and the current economic climate has forced down the valuations of many tech companies.

The Observer reported:

“The magazine reports that venture capital firms Thrive Capital and Founders Fund are among the investors interested in buying $300 million worth of OpenAI stock. The deal is structured as a tender offer, with investors purchasing shares from existing shareholders, including employees.”

OpenAI’s high valuation can be seen as a validation of the technology’s future, and that future is currently GPT-4.

Sam Altman answers questions about GPT-4

Sam Altman was recently interviewed for StrictlyVC, where he confirmed that OpenAI is working on a video model, which sounds incredible but could also lead to serious negative outcomes.

While the video model wasn’t said to be a component of GPT-4, what was interesting, and possibly related, is that Altman was emphatic that OpenAI would not release GPT-4 until it was confident it could be done safely.

The relevant part of the interview occurs at the 4:37 minute mark:

The interviewer asked:

“Can you comment on whether GPT-4 will be released in the first quarter, first half of the year?”

Sam Altman replied:

“It’ll come out at some point, when we are confident we can do it safely and responsibly.

I think in general we’re going to release technology much more slowly than people would like.

We’ll be sitting on it for a lot longer than people want us to.

In the end, people will be happy with our approach to this.

But I did realize at the time that people want the shiny toy, and it’s frustrating; I totally get that.”

Twitter is full of rumors that are hard to confirm. One unconfirmed rumor is that it will have 100 trillion parameters (compared to GPT-3’s 175 billion).

This rumor was debunked by Sam Altman in the StrictlyVC interview, where he also said that OpenAI does not have artificial general intelligence (AGI), the ability to learn anything a human can.

Altman commented:

“I saw that on Twitter. It’s complete b——t.

The GPT rumor mill is a silly thing.

…People are begging to be disappointed, and they will be.

…We don’t have an actual AGI, and I think that’s sort of what’s expected of us, and, you know, yeah… we’re going to disappoint those people.”

Many rumors and few facts

The two reliable facts about GPT-4 are that OpenAI has been so tight-lipped about it that the public knows virtually nothing, and that OpenAI won’t release a product until it knows the product is safe.

So at this point, it’s hard to say for sure what GPT-4 will look like and what it will be capable of.

But a tweet by technology writer Robert Scoble claims it will be next level and disruptive.

However, Sam Altman cautioned against setting expectations too high.

More resources:

  • Can AI perform SEO? OpenAI’s GPT-3 experiment
  • From Scratch to ChatGPT Hero: How to Harness the Power of Artificial Intelligence in Marketing
  • Why SEO Professionals Need to Master Redirects

Featured image: salarko/Shutterstock
