AI Predictions Over the Next 12 Months — To April 2024

Artificial Intelligence development is moving FAST…

…It got me thinking about what we can expect in just 1 year from now.

The thing about technology development is that an advancement in one area often speeds up advancements in other areas, which is why tech growth is exponential.

So, we could be seeing crazy stuff in just 12 months…

Watch the video for more details:

Approximate Transcript:

This video is about AI over the next 12 months: what I expect to see and what seems likely. I'm shooting this in April of 2023, so these are the things I expect to see by April 2024.

First of all, expect an explosion of narrow AI models. Why? Because narrow models have faster development times and are less expensive to build, and more data is not necessarily better; reaching the data threshold you need is much, much easier when the model is narrow. Also, some use cases have very low error tolerance. The example I've given in a lot of videos is an AI surgeon. That's almost certainly going to be a specific model, maybe at first a model for one specific type of surgery, like gallbladder surgery, then one for stomach surgeries, and then maybe those generalize to broader areas: the chest, the abdomen, the ankle, whatever. These are situations where you do not want a little slip-up. You don't want to be calling the GPT-4 API and have it get a little bit creative with how it operates, at least for some parts of the job.

Maybe what you do is handle certain parts with the narrow model, and then, since GPT-4 is the best general reasoner, if the narrow model runs into something it doesn't understand, it calls GPT-4 for reasoning, while the actions themselves are carried out by the narrow models. That points to another thing: just because it's a narrow model doesn't mean it works by itself. Sometimes you need multiple models. I think Tesla does this: they have two different models, and if the models don't agree, the car doesn't take the action. So that suggests several different narrow models that approach the same problem from different angles.

I've mentioned medical a lot, and I think it's the primary area, the first area where we'll see the most AI development, because it has the most potential. AI is really, really good for medical work, like testing and suggesting different drugs, because there's an essentially unlimited number of combinations, ways to put molecules together and form and shape their structure, and it's not really possible for humans to take that job on themselves. And this is not an exhaustive list; if I spent another five minutes I could probably add another five areas. There's also legal, logistics, data analysis, math and physics, software development. All of these get easier as the price and difficulty of developing a model keep dropping. Nvidia has a new AI cloud where you can basically get the same kind of GPUs OpenAI uses and start small, just like other cloud computing, and that's going to make a lot of this much easier. So this is my biggest prediction: there will be way, way more narrow models, and some of them will be extremely useful and well formed, to the point where they're adding a ton of value to society.
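To make that narrow-model pattern a little more concrete, here's a minimal sketch in Python: two independent narrow models have to agree before any action is taken, and anything they can't handle gets escalated to a general reasoning model. Every name and interface here is a hypothetical illustration, not a real surgical or driving system.

```python
# Minimal sketch of the narrow-models-plus-general-reasoner pattern described
# above. All class and method names are hypothetical illustrations.

def decide_action(observation, narrow_model_a, narrow_model_b, general_reasoner):
    """Act only when two independent narrow models agree; otherwise escalate."""
    action_a = narrow_model_a.predict(observation)
    action_b = narrow_model_b.predict(observation)

    if action_a is not None and action_a == action_b:
        # Both specialist models agree, which satisfies the low error
        # tolerance requirement, so the action can be taken.
        return action_a

    # Disagreement, or a case neither narrow model recognizes: defer to a
    # general reasoning model (e.g. a call to a hosted LLM API) instead of
    # letting a narrow model guess.
    return general_reasoner.reason(observation)
```

The point is the structure, not the specifics: the narrow models carry out the well-defined actions, and the general model only fills in the reasoning gaps.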
What about GPT-5? I think it's pretty likely that if it's not out within a year, by April 2024, then it'll be out soon after. That's what I'd expect. The time it took to go from GPT-3 to GPT-4... wow, I just looked it up. I thought it was faster than that. It was actually almost three years, more like two years and nine months, so that's quite a bit of time. Some people say they've heard OpenAI mention GPT-5, and they do seem to be speeding things up, but maybe it's more like late 2024, maybe even all the way into 2025. It's hard to say. I do expect that with their success, especially with ChatGPT, they'll be able to put more resources into it; they have more funding and they're going to get more customers, which should speed everything up and keep this on people's radar. So maybe, if we're lucky, a year from now we'll have GPT-5. I think it's maybe a coin flip, maybe even less likely than that, but it's still not out of the realm of possibility.

If it is out, I'd expect it to score better than 99%-plus of people on pretty much all testing, and to have a large context window. I'd call it a "mega-modal" model rather than just a multimodal model, because maybe it takes on everything. It's hard to fathom how good its logic and reasoning would be, just because it's already really, really good right now. And this is maybe a separate point that I should have pulled out on its own, but it's going to be really hard, almost impossible, to tell online whether somebody is a bot or not, unless you've already met them in person, and even then they could be using AI, or it could not actually be that person. Something to think about.

Will it be AGI? I'd say that's very unlikely, and the reason is that I think something else is needed beyond a large language model. Let me give you an example: a large language model doesn't really have a memory, so to speak, and I think pieces like that will be needed for something to be considered an AGI. But it will certainly pass a crapload of tests. And maybe memory is something they add in; there's no reason they can't, maybe with a certain plugin, or by connecting GPT-5 to another piece of software that gives it memory, which might then give it something like autonomy. I'd say AGI is unlikely within a year, but it's going to feel a lot like it, or at least like we're really close.

What about Midjourney version 6? They seem to be moving really fast here. Even a year ago they were at version 2, and today they're at version 5, so maybe a year from now we're actually looking at something like version 6 to 8, maybe 6 to 10. Right now this is probably the best AI for general art, though maybe some specific models will come up for very narrow niches. In the video I show one of the outputs I got when I put just "Midjourney" in as the prompt, and it's already amazingly photorealistic. I think this goes back to the point below, which is that there's going to be reality slippage. Recently a photo of the Pope in a puffy coat went viral, and it was fake. Expect a whole lot more of that.
It's kind of crazy, because it's already super, super good right now, and it's hard to fathom how good it will be. I also wonder about companies and people who depend on stock photos for their income. I feel bad for them, because I just don't know why I would buy stock photos if I can make as many as I want very easily. Not that I was buying stock photos in the first place, but some people certainly do.

And really, there are just going to be, like, a billion new AI tools from so many different people. When ChatGPT came out, it put AI on a lot more people's radars; it really said, "oh my god, we're here." ChatGPT wasn't perfect, but it was still amazing, and GPT-4 is kind of nuts. And OpenAI isn't just releasing GPT-4: there's also Whisper, which is going to get better, and DALL-E, which is going to get better. Whisper right now only goes one way, audio to text; I think they'll eventually go both ways, but don't quote me on that. On top of that, people are using OpenAI's API, connecting it into their own software to create all sorts of awesome GPT-4-powered tools, which is really awesome.
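To give a sense of what "connecting into OpenAI's API" looks like, here's a minimal sketch using the openai Python package roughly as it worked in early 2023: transcribe audio with Whisper, then hand the text to GPT-4. The library's interface changes over time, so treat the exact method names as a snapshot of that era, and the API key, file name, and prompt as placeholders.

```python
# Minimal sketch of building a small tool on OpenAI's API (interface as of
# early 2023; method names may differ in newer versions of the package).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Speech to text with Whisper.
with open("meeting.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)["text"]

# Hand the transcript to GPT-4 for a summary.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You summarize meeting transcripts."},
        {"role": "user", "content": transcript},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Most of the "billion new AI tools" I'm expecting are some variation on this: wire the API into an existing workflow and add something useful around it.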
A lot of what's going on right now is people just doing the same thing in different places, and in general that's not very useful, though sometimes it is. Like ChatGPT in a web browser extension: okay, so I don't have to go over here, I can just go over there. It's not really adding a lot of value. But I expect some people to do more creative things (hopefully that's me) that create a lot of value and actual uniqueness. There will be new large language models, new companies trying to compete with GPT-4, but also, as I pointed out before, specialized large language models, maybe one for doctors, all sorts of domain-specific models.

I also expect a lot of new text-to-image, or really text-to-video. Right now there's a little bit of text-to-video, but it's pretty bad; I expect it to be very different and much better a year from now. Audio we already do pretty well, like voice cloning. It's not quite perfect, but it's really, really close, and maybe a year from now voice cloning will be perfected. That's kind of crazy, and again it's part of the reality slippage: it's going to be really hard to tell what's real. You could have a deepfake of a politician saying something that looks and sounds exactly like that person, but it's not real. This is going to happen, and it's probably one of the biggest downsides of AI in general.

If you look at the effects of social media, it did some really nice things for society, but in general there's a really dark side, which is that it allows misinformation to spread much more easily than before. When the internet first came out, the idea was that everybody gets information, that it democratizes information. To some degree that happened; we had the Arab Spring, which was nice. But when bad actors figured out how to manipulate people, they hit that stuff hard, and there's a lot of that going on right now. Unfortunately, it's probably going to get worse before it gets better. Hopefully somebody comes up with tools to fight this reality slippage, tools that can identify voice cloning or AI-generated photos using AI. So it's going to be AI protecting us against AI. That's definitely going to happen, or at least people will try it, and I expect some amount of success for some things; for others it won't be so easy.

You'll also see integrated versus patched-on tools. A lot of old tools you're familiar with will try to integrate AI, and by "integrate" I mean actually mix it in with the whole feature set, deep inside the software or product, versus just patching it on. You see a lot of patching on: "go use ChatGPT in our thing," or "it'll write an email for you." I don't find that very useful, and I expect a lot of it to continue. A lot of tools will just do that, leave it at that, and say "we're AI-powered" when really it's just a patch on top of something old.
What are the odds of AGI? First of all, this is a fuzzy definition; you could ask ten different people and get ten slightly different answers. My own framing would be something autonomous. David Shapiro calls it an ACE, an autonomous cognitive entity, and I think that's a pretty good definition, a little clearer than "artificial general intelligence." It's an entity that is autonomous and intelligent and can act like a sentient being; not necessarily that it is sentient, but that it can act like one. One of the things I mentioned before that GPT-4 and GPT-5 don't have, at least as far as I'm aware, is long-term memory, and also a sort of long-term context. Maybe they have a little long-term context from being trained on so much data, but they don't have long-term memory of what they've actually experienced, which I think is pretty critical to creating an ACE. And there are several other pieces to this puzzle, I think. So my odds would be low-ish. Not zero.

Then again, maybe we get a really fast GPT-5, or a GPT-4.5, and somebody plugs it into some of the other pieces. Say it's GPT-4.5 plus X plus Y, two more pieces: somebody takes the GPT-4.5 API, plugs it in, and actually creates something that could be considered an ACE. That's very possible. Or maybe it's OpenAI itself that does this.

So let me know what you think. Let me know what you think will happen over the next year. Please leave a comment, like the video if you liked it, and subscribe if you want more like this. Thank you very much and have a great day. Bye.
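To make the "GPT-4.5 plus X plus Y" idea from the transcript a bit more concrete, here's a rough sketch of how the extra pieces might be wired together: a language model for reasoning, an external long-term memory, and an autonomy loop that keeps working toward a goal without a human prompting every step. Everything here is a hypothetical illustration (the `llm.complete` and `tools.execute` calls are stand-ins for whatever real API you'd use), not a claim about how an ACE would actually be built.

```python
# Hypothetical sketch of an "ACE" composed from pieces: an LLM for reasoning,
# an external memory of past experience, and an autonomy loop. The llm and
# tools objects are stand-ins for real APIs.

class ExternalMemory:
    """Toy long-term memory: store experiences, recall the most relevant ones."""

    def __init__(self):
        self.entries = []

    def remember(self, text):
        self.entries.append(text)

    def recall(self, query, top_k=3):
        # Naive retrieval by shared words; a real system would use embeddings.
        query_words = set(query.lower().split())
        ranked = sorted(
            self.entries,
            key=lambda entry: len(query_words & set(entry.lower().split())),
            reverse=True,
        )
        return ranked[:top_k]


def autonomous_loop(llm, memory, tools, goal, max_steps=10):
    """Plan and act toward a goal until the model decides it is done."""
    for _ in range(max_steps):
        recalled = "\n".join(memory.recall(goal))
        plan = llm.complete(
            f"Goal: {goal}\n"
            f"Relevant past experience:\n{recalled}\n"
            "Reply with the single next action to take, or the word DONE."
        )
        if "DONE" in plan:
            break
        result = tools.execute(plan)                        # act in the world
        memory.remember(f"Did: {plan}\nResult: {result}")   # accumulate experience
```

Whether a loop plus memory like this would actually count as an ACE is exactly the open question; my guess in the video is that it would at least feel close.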