Blog

The Implications of Large Language Models (LLMs) Hitting the Wall

Recently, Sam Altman said, “The Age of Giant AI Models is Over.”

What he meant by that was, “our strategy to improve AI models by making them much bigger is providing diminishing returns.”

So, I thought it would be interesting to explore what happens if LLMs hit the wall and improvements dramatically slow.

Approximate Transcript:
Hi, this video is about large language models (LLMs) hitting the wall and the implications of that. In case you haven't heard, I shot a separate video about this, but Sam Altman recently stated that the age of giant models is over, which I think is a bit misleading. Basically, what he was saying was, you can't improve any more just by adding more data and more parameters. And this makes sense, and it's something some people predicted was coming, because GPT-4 captured so much of the available data. They didn't release the details, but GPT-2 had 1.5 billion parameters, which are sort of like the amount of neurons, the number of different factors the model considers. GPT-3 had 175 billion. We don't know how many GPT-4 has, they didn't release that, but estimates are that it's a big leap over GPT-3. And also, potentially, they're kind of out of data. Now, more data is being created every day, so it's not that they're out of data completely, but perhaps there's just not enough to get that kind of exponential leap. But also, I think he implied, and this makes sense, that sometimes more data just isn't better, that more data doesn't necessarily give you a better answer. I elaborate on that in my other recent video. So let's assume for the sake of argument that large language models, OpenAI included, hit a huge wall, and they're maybe not unable to move forward, but their progress has slowed dramatically, and we don't see anything like what people think GPT-5 should be for five or ten years, because maybe there's another technological development that needs to happen. So what comes about because of this? Let's look at the good.
I think probably the biggest thing is for the world to kind of catch up mentally, especially when it comes to misinformation being spread, identifying it, and helping people adjust to the new reality that we're finding ourselves in right now, this year, 2023. That's probably the only good thing I can think of, except maybe that the pause some people were in favor of just kind of happens naturally. I personally don't think that the pause is a good idea. And there are three dots here, because I don't really see a whole lot of good coming from this. I'm sure there are plenty of people who would be celebrating if this is the case; I will not be one of them. The bad: here's what I would say about the bad. Good tech is slowed down. There are a lot of really good use cases coming about because of these AI models that can dramatically improve people's lives. Maybe in some cases this doesn't affect that, but in some cases it likely will. Just to give an example, there's a bunch of different stuff with regards to health care, saving lives, curing diseases, that AI has already shown itself to be quite proficient at and that is moving forward rapidly. Perhaps that slows down, and to me, that's bad. I think there's also an argument to be made that this could actually be better for bad actors. The reason is that I think OpenAI, moving forward, would actually help tamp down the bad AI models. They have demonstrated to me pretty thoroughly that they have good intentions, and that if there was a bad model out there that GPT-4 or GPT-5 could help tamp down, identify, and fight back against, they would work on that and help with that. So I think this actually opens the door for bad actors, and it'll make sense when I get to the last bullet point. Let's look at how good GPT-4 is right now.
And I would say that it's really freakin' good. I was trying to test it the other day: it's supposed to be bad at math, and it actually did a pretty good job, showing its work, and it got it right. Not a super complicated problem, but more complicated than the ones other people were saying it got wrong. And I need to add the hallucinations here too. There are still some things it struggles with: math, as we mentioned before, recent events, hallucinations. I think there are some more; put them in the comments below if you have other ideas. But it only struggles with some things, not a whole lot, and it does a whole lot really, really well. So I think right now it's actually at a point that is pretty profound, just GPT-4 as it is now. Now, Sam Altman did state that there are other ways in which they are looking to improve it, and I believe them, but maybe it's just slower. Let's assume for the sake of this argument that it's slower, that it's just more minor updates that come together further down the line, in terms of years, to create the hints of a bigger change. Which is kind of what they said: they did say that a lot of their improvements were just a bunch of little ones that all worked together, where the whole is greater than the sum of the parts. How much can it really improve? Now, the plugins, the ChatGPT plugins, actually do have a lot of potential to shore up the weaknesses.
Specifically, I shot a video on Wolfram Alpha and math; those things work well together, and it worked pretty well, so that covers a huge weakness. Recent events: there is some way around this, to connect it to the internet to some degree, or to pull information from the internet and put it in your own database. That's very recent, and I do think it will be helpful. I'm not sure about the hallucinations; plugins are probably not really going to help with that. This is probably one of their biggest challenges, the hallucinations. It's a real issue that reduces the value of GPT-4. I mentioned that they're working on it pretty hard, and I'm optimistic that they'll be able to solve it, but who knows, it might take five years before they can say it rarely, if ever, hallucinates. Longer context windows: they increased the context window from GPT-3 to GPT-4 by quite a bit, a maximum of 32,000 tokens versus, I believe, 4,000 tokens. That's an 8x increase, which is close to an order of magnitude, pretty substantial. I don't think going further is really necessary. I know some people were saying that if we had even more than 32,000 tokens, which I think is something like 50 pages of content, you could just put a whole book in there. But the problem with that is that more data is not necessarily better. I think you get diminishing returns, and you kind of water down the things you want to see if you have these huge context windows and dump massive amounts of data in there. So I don't necessarily think this is a big improvement; I think the context window right now is quite large, and there's always going to be a limit to it.
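To put those context-window sizes in rough physical terms, here's a quick sketch using common rule-of-thumb conversions (about 0.75 words per token and about 500 words per page; these are estimates, not exact figures):

```python
old_window = 4_000    # roughly the GPT-3-era context window, in tokens
new_window = 32_000   # the largest GPT-4 context window, in tokens

# How much bigger is the new window, and how many pages is that?
increase = new_window / old_window
pages = new_window * 0.75 / 500   # tokens -> words -> pages

print(f"{increase:.0f}x larger context window")   # 8x larger context window
print(f"~{pages:.0f} pages of text in 32k tokens")  # ~48 pages of text in 32k tokens
```

So 32,000 tokens holds roughly a few dozen pages, well short of most full books, which is part of why "just put a whole book in there" still runs into limits.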
And so this is a problem that developers and people are going to have to deal with; it is being worked on, and I think there are solutions for it. Recent data: I think this is something a plugin would be able to help with significantly. They seem very resistant to adding a new dataset, maybe because they spent so much time, money, and energy training GPT-4 on the dataset they had, and recreating that dataset and retraining again might cost them hundreds of millions of dollars. So it's possible they'll just look to Band-Aid it with plugins. That still is, to me, kind of a Band-Aid. Bing does have a sort of use-current-search-results feature, and I think that's helpful. And I think GPT-4 also has something where it can actually go to a website and use it as a reference, so that's really helpful. Again, a little bit of a Band-Aid. But I don't think this is a huge issue, because just knowing this limitation means you can use GPT-4 just fine for pretty much all use cases, almost all use cases, barring the biggest issue to me, which is the hallucinations. Alright, so the biggest area of opportunity for AI: even with all this, even if GPT-4 is the exact same level of quality five years from now, there's still a crapload of opportunity. I would say it's not just business opportunity, it's kind of human opportunity, although I think the context of business makes a lot of sense. It's what are called narrow AI models. These are models that are made for specific situations. I see a lot of models out there that are broad, general models trying to be AGI, trying to be a generalized intelligence, and that's great work.
And that's really, really helpful. But I think there's just so much value you can get by narrowing the focus of an AI model to a specific use case, or a set of use cases that target a specific market. It's way less expensive to train, and you can get higher quality results with way fewer parameters. You could even consider taking these narrow models and making them large-language-model size, for the quality you might get from that, though I'm not sure you really need to in a lot of cases. But these narrow models are going to shine, I think, over the next two or three years, regardless of what happens with OpenAI and GPT-4 or GPT-5.
I mean, there are so many use cases still to be developed. Think about how many different pieces of software are being used right now. Every single one of those represents a possibility for a narrow AI model, at least, and potentially more, because there are so many different use cases. And there are so many different ways, especially with the low-cost base models being published right now that are open source, that cost maybe $500 to $600 to build and train, maybe even less, and are still really, really good as general models. You take one of those, you train it a bit more for your narrow, specific case, and boom, you have a very powerful model for a very specific use case. I think this is where a lot of the AI investment should go, and I think the people who do that are going to be rewarded greatly. I plan on doing that myself, and more on that another time. So thank you for watching. If you liked this, please like and subscribe for more videos, and have a great day. Bye.


“The Age of Giant AI Models is Already Over” says Sam Altman, CEO of OpenAI

This statement by Sam Altman is provocative…

…there seems to be an implication that giant AI models are no longer useful…

…but this is not what Sam means.

Approximate Transcript:

Hi, this video is about something that sounds really profound that Sam Altman, the OpenAI CEO, said recently: that the age of giant AI models is already over. I think this statement, taken out of context, is a bit misleading. I saw a smaller headline that I clicked on that made it seem even more salacious, kind of like, is he saying that ChatGPT is done?
Like, it's not good anymore? That's not what he's saying, but that's kind of what my first reading of it was.
It's like, oh, we're not going to use them anymore. No, they're going to keep using the large language models. What he really means is that they can't grow the improvement of them by making them bigger. That's the short answer. There's a little more context I want to add as well, which is that this has been the philosophy of OpenAI from the beginning, and for quite some time. Andrej Karpathy, very famous in the AI world (I believe he was the head of AI at Tesla, and I think he's actually at OpenAI now): I've watched several of his videos, and one of the things he talked about was that, number one, the code for these AI models, basically since 2017 when Google released their transformers paper, is very short and really hasn't changed a whole lot. It's, I think, around 500 lines, which for code is very, very small. And he talked about, I believe it was him, how the strategy, the way to improve it, is just to make it bigger: keep making it bigger, add more parameters. Parameters are sort of like neurons. To give context, they show it here in this article: GPT-2 had 1.5 billion parameters. There's a funny tagline here, "generated by artificial intelligence"; I wonder if this is from an AI movie or a series about AI. Anyway, 1.5 billion, and then GPT-3 had 175 billion parameters, and that made it way, way better; the added parameters were a large reason for the improvement. And then GPT-4: they didn't announce how many parameters it has, but it's supposed to be much bigger. So what he's saying is that adding more parameters or neurons isn't going to improve the model anymore; there are diminishing returns in that area.
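To put the GPT-2-to-GPT-3 jump in raw numbers (using only the published parameter counts, since GPT-4's was never disclosed):

```python
# Published parameter counts (GPT-4's was never disclosed)
gpt2_params = 1.5e9   # GPT-2: 1.5 billion parameters
gpt3_params = 175e9   # GPT-3: 175 billion parameters

scale_factor = gpt3_params / gpt2_params
print(f"GPT-3 is roughly {scale_factor:.0f}x the size of GPT-2")
# GPT-3 is roughly 117x the size of GPT-2
```

A leap of that magnitude is exactly the kind of scale-up Altman is saying no longer pays off.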
Up to a point, this just isn't going to give you more. I think another way of looking at this is that more data doesn't necessarily improve the quality of the model. Just in general, from a data analysis standpoint, more data isn't always better and doesn't always improve things. A quick aside, if you're wondering why you should believe me about data: basically, for the last 20 years I've done data work from both a theoretical and a practical standpoint. I have a master's degree in Industrial Engineering, which is actually closer to data science than it is to engineering, and it involved a lot of statistics and analysis of huge, weird datasets. Then I worked at a semiconductor factory, where there's a lot of complicated data, spreadsheets with tens of thousands of rows and dozens of columns, and I worked there for about six years. And for the last 11 years I've done SEO, which is another kind of practical data analysis, very different from semiconductors, but still data. So I've been studying data; it's been my jam for a very long time. And it makes sense: sometimes more data doesn't add a clearer picture of the situation. And they have talked about this; it shouldn't come as a surprise, even though the headline reads like a bombshell. It's been talked about for a while that, number one, they're going to run out of data to crawl. That's not entirely accurate, because more data is being created every day, and that rate of new data is increasing over time, but it certainly hasn't been increasing at the rate at which they've increased their models. Additionally, more data doesn't necessarily help clarify the situation. I think I've got a reasonable analogy.
It's sort of like trying to draw a 3D picture when you can only work with dots. You put in a handful of dots, and you can see the outline of, say, a guy on a motorcycle, so you kind of know what it is. Then you put in a bunch more dots and you get a lot more clarity: you can see his facial expression, and you can see that he's got a bandage on his leg or whatever. Then you put in more dots and you get a very clear picture. Now, when you add even more dots to the picture, to the dataset, there's no additional clarity, or the clarity added is very minor. I think this metaphor works for how they're dealing with the data and the parameters of GPT-4 and beyond. It does a lot of things really well right now, and adding more data doesn't necessarily improve that; it can actually take it backward. Also, as you grow the dataset, whatever you add is a smaller and smaller percentage of the total. So when you add more and more, you're getting close to a kind of baseline, and each addition is like a drop in the ocean; there's only so big it can get, only so many conclusions that can really be drawn from the data at some point. But there's also a flip side, which is that sometimes more data can actually be bad, because it's not just about raw data. It's also about the right data, and about processing and interpreting the data. So you could potentially have a smaller model that's better than GPT-4; that is definitely possible, and I think they'll get there. So what does he say? He says that they'll make it better in other ways.
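The "drop in the ocean" point is just arithmetic: if you keep adding equal-sized chunks of data, each new chunk is a smaller share of the total than the last (the numbers below are purely illustrative):

```python
# Each time we add the same amount of data, it represents a
# smaller share of the total dataset.
chunk = 1_000   # new examples added each round (illustrative)
total = 0
for round_num in range(1, 6):
    total += chunk
    share = chunk / total
    print(f"round {round_num}: new data is {share:.0%} of the total")
# round 1: new data is 100% of the total
# round 2: new data is 50% of the total
# round 3: new data is 33% of the total
# round 4: new data is 25% of the total
# round 5: new data is 20% of the total
```

The marginal contribution keeps shrinking, which is the shape of the diminishing returns being described.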
And this shouldn't come as a surprise if you've been listening to him. I do recommend the Lex Fridman interview, two and a half hours with Sam Altman, riveting to me and hopefully to you as well, where he kind of alluded to this already. And there's been a lot of talk about how they would run out of data around GPT-4, somewhere in that timeframe. So this is not surprising. But there is a big implication here, which is that maybe this slows things down. They've been making the model better in other ways than adding data, but the main thrust of the improvement was coming from more data. So this might actually substantially slow down the development of these AI models, because now they're going to have to find new ways to improve them, and it might take another five or ten years to find that new way. Or maybe GPT-4, which is pretty excellent, by the way, can only get minor changes, minor improvements, for quite some time. It is a little disconcerting to think that maybe they're actually at a wall right now, like it's already there. That is possibly why he's saying this: he's alluding to what's happening in the company, that they're realizing, holy shit, this thing isn't improving anymore, or it's improving very marginally for a huge cost. He has said that building GPT-4 cost over $100 million, and if you listen to how he says it, it's like, well over $100 million. So the idea of building a GPT-5 that much bigger could cost over a billion dollars, maybe more, if they could even do it. And they might not even be able to do it right now. This also implies, to some degree, that when people talk about AGI and the recent speed of change, that might actually slow down quite a bit, and we might still be quite far out from a superintelligent AI.
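The cost extrapolation here is simple to spell out. The roughly $100 million GPT-4 figure comes from Altman's own comments; the 10x multiplier below is purely a hypothetical stand-in for "that much bigger", not a published number:

```python
gpt4_training_cost = 100e6     # "well over $100 million", per Altman
hypothetical_scale_up = 10     # assumption: a GPT-5 roughly 10x more expensive

gpt5_estimate = gpt4_training_cost * hypothetical_scale_up
print(f"rough GPT-5 training cost: ${gpt5_estimate / 1e9:.0f}+ billion")
# rough GPT-5 training cost: $1+ billion
```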
And just in general, certain types of broad, can-do-all-things AI models may be out of reach for quite some time. The good news is that I still think, if you're looking to be in the AI space, there's a lot of opportunity with AI even without giant-model progress, by doing what I think people are calling narrow models, where the use case is narrowed down. That means you need way less data to get a good result, because the situations are much leaner, much smaller, and much more controllable, and in that way there's still a lot of room to grow. Because if you had a GPT-4-sized model for something specific, let's say an AI surgeon, I don't know, I'm just putting that out there, then it could probably be really, really freaking amazing, way better than GPT-4 is at any one specific thing. So the conclusion is basically that even if OpenAI's development stalls and we don't see GPT-5 for, like, seven years, that doesn't mean the AI space is stuck, or that there's not more that can be done. I think it implies more that some of the big, ambitious, broad, super-AGI-type things might actually be further away, because we might need a new technological development, something new to come along that's not a transformer, or maybe a next-level transformer, or maybe another piece of technology that connects into the transformer and supports it and amplifies it, something like that. There are a lot of different possibilities. But the problem is that they don't know yet. So this strategy has kind of come to an end; that's what he's saying with
this. When he says it's come to an end, it's probably taken out of context; it really should say that the strategy of building bigger and bigger models is over for them, for OpenAI. Maybe not for other companies, but that was the strategy that got them to where they are today. Anyway, thank you for watching. Let me know in the comments if you agree with me or disagree with me; I'll put a link to this in the comments. Like this video if you liked it, and subscribe for more awesome AI videos. Thanks. Have a great day. Bye.


Cool New Midjourney Feature

Midjourney is the best image AI model as of April 2023. It produces, hands down, the most photorealistic images.

The usability of Midjourney…leaves a lot of room for improvement.

“Permutations” is a big improvement that makes it much faster for the user to generate a lot of different images with one command.

Check it out:

Approximate Transcript:

Hi, this video is about a really cool new feature in Midjourney. I've started using Midjourney a lot more recently; you might notice that in the thumbnails, I almost always start with Midjourney, and almost all the graphics I use here are from Midjourney. Usually when I go in to do that, I want to create different variations: I have a few different ideas for how things could work out, or for different features I want, or maybe I want to look at two or three different styles. And it creates a lot of extra work when you're trying to do a bunch. So here's what we've got: within each set of curly brackets are the different options, separated by commas, and you can have quite a few. We have four here, four here, and four here, so this is four cubed, which should be 64 different combinations. So if I hit Enter... "too many prompts, the limit is 40." Okay, news to me. So let's see, we'll copy in here, we'll take away "robots," and we'll take away one more of the nature themes. Okay, that should get us under the limit. There we go, and it just starts firing them out. Now, I have the highest tier of Midjourney, so I think it happens a lot faster for me than for some people, but this is a really cool feature to get a bunch of variety in your art really quickly, and then see which ones work best for you. A really awesome feature. I always love it when software companies invest in making things faster, more efficient, and more productive for their users. So thank you very much, Midjourney. If you're already using Midjourney, definitely start using this; I'm sure you'll find it immediately very useful and very simple to use.
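The combinatorics behind the permutation syntax are easy to check: Midjourney expands every combination of the curly-bracket options, so four options in each of three slots yields 4 × 4 × 4 = 64 prompts, and trimming two slots down to three options brings the count under the 40-prompt limit. Here's a sketch with placeholder option lists (not the actual prompt from the video):

```python
from itertools import product

# Hypothetical option groups standing in for the {..} blocks in a prompt
styles   = ["watercolor", "oil painting", "photo", "sketch"]
subjects = ["castle", "forest", "robot", "city"]
moods    = ["sunny", "stormy", "foggy", "night"]

# Midjourney-style expansion: one prompt per combination
prompts = [" ".join(combo) for combo in product(styles, subjects, moods)]
print(len(prompts))  # 64 -- over the 40-prompt limit

# Dropping one option from two of the groups gets under the limit: 4 * 3 * 3
trimmed = [" ".join(combo) for combo in product(styles, subjects[:3], moods[:3])]
print(len(trimmed))  # 36
```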
So thank you very much. Like this video if you liked it, and subscribe if you want more awesome AI content. Hope you have a great day. Bye.


OpenAI Not Working on GPT-5?

Sam Altman, CEO of OpenAI, made some interesting comments recently about GPT-5.

They are being heavily interpreted, and it seems to me that some are reading a bit too much into the comments…

…so, I decided to do my own reading into the comments lol.

Approximate Transcription:

Hi, this video is about what Sam Altman said about GPT-5, what some of the reactions are to it, and some interpretations of what it really means for GPT-5 going forward. There's a video, and I'll put a link to the video in this tweet, and then an article, so you can read it all if you want. In the quick video, Sam Altman calls into a Lex Fridman event, and he says: we are not currently training GPT-5; we're working on doing more things with GPT-4. I watched another video where someone said this means they're not working on GPT-5. That's not the same thing to me, because you can work on the algorithm or the model without training it. Although, I know the code for it is supposed to be pretty simple, so maybe there's not a lot of work to be done there. Or maybe they are kind of working on it by working on GPT-4: solutions they find for GPT-4 are solutions they can take and apply to GPT-5. I don't think this pushes the timeline out, and I don't think it should be interpreted as them trying to put a pause on things, you know, to heed the call of those people from about a month ago. Actually, Sam Altman comments on that in this video; he says something to the effect of, hey, they have some valid points, but there are some other things he thinks are technically not very accurate. I still think we're on track for something like a GPT-4.5 maybe late this year or early next year, and then GPT-5 maybe about two years out. That's based just on the history; obviously, it's a wild guess. But some people also seem to be interpreting this as a lie. I don't think it's a lie; I have found Sam Altman to be extremely straightforward in every single thing.
And I've watched a lot of his stuff. He tells it like he sees it and calls it like he sees it; he's, you know, somewhat diplomatic about it, but he's not afraid to say "I disagree with someone" or "this is what we want to do." So I don't think this is him trying to claim they're not really working on it when they are. The fact that they're not training it isn't too surprising, it isn't big news, and I don't really doubt him. But I'm genuinely curious: what do you think? Looking at this poll right here on Twitter, a lot of people doubt him. Do you doubt him? Do you think it's true? What do you think this means? I'm very curious what you have to say. Anyway, this is a quick update. If you liked this video, like and subscribe for more awesome AI videos. Thanks. Have a great day. Bye.


Artificial Intelligence Business Opportunities

The latest developments in artificial intelligence have created countless new business opportunities.

In the video below, I explore some of the angles and vectors that I think are getting overlooked in this field.

Approximate Transcript:

This video is about AI business opportunities. Basically, it's obvious to many people, maybe less obvious to some, but where we're at with AI right now, what has happened in just the last year or two, and what is likely to happen in the next year or two, means there are major, major new opportunities for people to either start new businesses or add on to their current business. And in a lot of ways, these things also apply to people who work for other people but want to continue to be valuable to their company, or to other companies, going forward. There's really a massive abundance of opportunity here. If you're setting out to create a new business or add something to your business, it's actually more about narrowing your focus than about whether there's enough opportunity out there; there's just so much. AI is going to change a lot, because it's going to touch every single industry very quickly, some industries sooner and faster and more initially, but ultimately it will touch a lot of things. This is not like Bitcoin, this is not like blockchain, this is not even like the cell phone opportunities; there's so much more. I think it's on par with the internet. Bill Gates said that there's only one thing he's seen in his 40 years that was transformative, and that was actually the graphical user interface; he didn't even mention the internet. To be clear, all of these things needed to happen in order for this to work. But here we are, and AI is really, really amazing. So here's one of my major recommendations: do a little bit of research, then pick a path, a good-to-excellent path, and stick to it. Just hit it and stay focused. You'll hit a wall; keep going and stick with it to some degree.
Now, that doesn't mean you don't pivot, but it means you don't stop completely or abandon everything at the first hurdle you see. This is also about not searching for the perfect opportunity, because you don't need the perfect opportunity, and searching for it just puts you on an endless quest where all you're doing is searching, searching, searching and researching, researching, researching; you don't actually do anything, and you miss the opportunity. A lot of times what happens is you start one thing, you get halfway into it, then you see what you think is a better opportunity, so you stop the first thing and go to the second one, and then you rinse and repeat, and after a few years you end up with multiple half-built businesses. So I definitely don't recommend that. The first area I would point to is software development. There are new software capabilities that have come about just in the last couple of years by calling the AI APIs; for example, text summarization. You couldn't really do this in any sort of efficient way before, unless maybe you had a really specific set of niche-specific text, yet AI can now do it very inexpensively, and it's actually extremely valuable and very useful within software. There's a host of other things like this that software can do now that it couldn't do before, in some cases even just a year ago. Or, more precisely, in some cases it could do it a year or two ago, but the solution was really bad or really expensive, and that has improved dramatically since then. You effectively couldn't do it before, even if theoretically you could; that's the kind of thing I'm talking about. Also, I think there's a lot of room for what I would call AI-winner-independent solutions.
By AI-winner-independent I mean: if OpenAI wins, if Google wins, if Nvidia wins, if some new company we've never heard of yet wins the AI war, your solution still works. One way to get there is to avoid depending too heavily on a single provider. That said, you could be using OpenAI's API today and switch to somebody else's API in the future if you need to, so depending on one provider for now isn't fatal. This is more about building tools that use a variety of different APIs, and that could even let users pick between them; it will make more sense if you go down that path. Then there are also a lot of niche or use-case-specific solutions. I've mentioned in other videos that just because there's a superintelligent artificial general intelligence a thousand times more intelligent than the smartest human ever, that does not mean it will do everything, or even do everything well, and there's a lot of room, especially for smaller competitors, to come in. So pick a specific niche and focus on solving that specific problem really, really well, and cater to it. You might think, "But I want more opportunity." That's not how business works: you can't be all things to all people, and trying to be is a recipe for failure. The more you can narrow this down, the faster you'll build value, and the more value you'll ultimately provide. There are tons of $100-million-plus companies that are extremely narrowly focused.
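One way to structure that provider independence is a thin routing layer, so the rest of the product never calls a vendor SDK directly. This is a minimal sketch under the assumption that every backend can be reduced to a prompt-in, text-out function; the class and method names are my own illustration, not from any real library.

```python
# Sketch of a provider-agnostic LLM layer: each backend is just a
# function from prompt to text, registered under a name. Swapping
# vendors (or letting users choose one) is then a one-line change.

from typing import Callable, Optional

class LLMRouter:
    def __init__(self):
        self._backends: dict[str, Callable[[str], str]] = {}
        self._default: Optional[str] = None

    def register(self, name: str, backend: Callable[[str], str], default: bool = False):
        """Add a backend; the first one registered becomes the default."""
        self._backends[name] = backend
        if default or self._default is None:
            self._default = name

    def complete(self, prompt: str, provider: Optional[str] = None) -> str:
        """Send the prompt to the chosen backend, or the default one."""
        name = provider or self._default
        if name not in self._backends:
            raise KeyError(f"no backend registered under {name!r}")
        return self._backends[name](prompt)
```

In practice you'd register a function that wraps OpenAI's API today, and later register a second backend and flip the default, without touching the rest of the codebase.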
And in AI there's going to be lots of opportunity for that. I don't have a list of all the tools, there are just so many, but here are a few. Integrating tools into your business, or seeing how a tool makes a certain business process 100 times more efficient or 100 times better, can lead to whole new businesses that are kind of like classic businesses but with a fresh face, so to speak. ChatGPT and its plugins; Midjourney, which is so good you could build whole art businesses on it, and version 5 is really, really good; GitHub Copilot, integrated into your software development. There are so many more, and more coming out every day. Another point here is a little less about business opportunities, although it does open them up too: if you have a job, commit to skill building right now. I think that's one of the best things you can do, so that you're aware of these tools, understand them, and know what's actually possible with them. Even if you have a job right now, there could still be great business opportunities in following what I'm suggesting here, and it can also make you a lot more valuable to your company. Say AI reduces jobs by 50%, which is the kind of rough estimate people throw around. Who goes first, and who is left? The people left will be the ones who understand their industry and how it integrates with AI. So if you start studying AI, and also study, think about, and brainstorm how it affects your industry or your specific company, that's going to make you way more valuable.
It also potentially gives you an out: if you come up with a really good idea and the company or industry you work in says, "We don't want to do that," you can say, "See you later," and go do your own thing. You get to learn, think, and build understanding on their dime, and if they won't reward you for something that would genuinely benefit them, then go out on your own. Understanding AI in general, and how it integrates into your industry, is going to be extremely valuable. Even if you take AI completely out of the picture, being among the best, the 50% who will be left, makes you a lot stickier, and it will make it much easier for you to see how AI can fit in. There will also be industry-specific AI solutions; keep an eye out for a big problem in your industry that you think generative AI can fix. That's going to be a good way to go. I'll quickly allude to my first AI project, which is coming out very shortly; I hope to ship the first version sometime in April 2023. It's sort of an AI-winner-independent solution. Let me read its USP, or at least my current working version: this is for frequent ChatGPT users, AI enthusiasts, and prompt engineers. The software will enable an organized, scalable, searchable, and systematic approach to managing model inputs and outputs in text, image, and audio, all in one conversation.
Amplify and streamline your work by discovering, organizing, filtering, searching, sharing, and building your prompts more effectively; invite your friends and colleagues to join in on your AI conversations and art; easily create templates for prompts to build off of and quickly refer back to later. So it's sort of a prompt-engineering tool, and it should be ready soon. If you're interested, let me know; leave me a message, and I'll be releasing specifics really soon. I think it will be really helpful. The few people I've shown it to are very excited to get their hands on it, because it saves a lot of time and actually lets you do more, and there are some capabilities in it that I haven't seen in any other tools. It's a pretty unique tool; I've been looking around and I don't see anything quite like it, so I'm very excited about it, and it's an opportunity built on the same principles I described above. Thank you very much. If you liked this video, give it a like, leave a comment below if you have any questions or thoughts, and subscribe. Have a great day. Bye.
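The prompt-template idea from the USP can be illustrated in a few lines. To be clear, this is not the author's product or its code; it's just a hypothetical sketch of what "templates for prompts to build off of" can mean in practice.

```python
# A tiny illustration of reusable prompt templates: store prompts
# with {placeholders} once, then fill them in per use. All names
# here are hypothetical, not from any real tool.

class PromptTemplates:
    def __init__(self):
        self._templates: dict[str, str] = {}

    def save(self, name: str, template: str):
        """Store a template containing {placeholder} fields."""
        self._templates[name] = template

    def render(self, name: str, **values) -> str:
        """Fill a stored template's placeholders with concrete values."""
        return self._templates[name].format(**values)
```

The value of even this trivial version is that a prompt you refined once becomes a named, reusable asset instead of something you retype and subtly vary every time.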


AI Bullshit vs AI Reality

There’s a lot of BS & hype out there right now about AI.

In this video I attempt to cut through the BS and identify the reality.

Approximate Transcript:

Hi, this video is about AI BS versus AI reality. There's a really well-done video by a comedian whose name I can't remember, but this is what he looks like, and I thought it was thoughtful in a lot of ways. He makes a lot of points that are practical and realistic, but that also sometimes miss the point or land on the wrong conclusion. So I thought it would be useful to shoot a separate video about which parts are BS, because he brings up a lot of valid points, and I do recommend you watch it, especially if you're feeling too high on AI and think it's the best thing ever and is going to be able to do everything. You may also find yourself wondering, "Is this really AI?" That's a valid point he brings up: AI is a real field of computer science, but a lot of what we're seeing right now is AI marketing, where people just slap "AI" on things. I remember a political cartoon where somebody put "AI" in a book title just so it would sell, and the joke ran through things like "AI pizza," people marketing anything with "AI" because it's hot right now. He compares it to crypto and the metaverse. I'd say the metaverse is dead, and crypto over-promised; I don't know if the potential was ever there to deliver nearly as much value as some of its promoters claimed. There are also a lot of jokes going around about people who were crypto experts and are now suddenly AI experts. That's not me; I never was that interested in crypto.
I did have some Bitcoin that I sold when it was between $42,000 and $45,000, but I was never a big believer in crypto, just because the broad applications for it were never obvious to me. That doesn't mean there's nothing there; blockchain seemed to have some potential, but it still has major issues. So I think people being disappointed in crypto is fair. The metaverse I'd dismiss even more readily; I never really knew what the objective was, and I thought it was even dumber. I remember hearing that Disney had a team of 20 or 50 people just for the metaverse, who have since been laid off as the metaverse basically died. He also talks about an AI DJ, and I like his term "AI tech bros." I'm not sure if I count as one; probably not, I'd like to think I'm a little more reasonable. I'm not trying to say AI does everything; if I ever say that, I'm joking. The "42 Robots" thing is a reference to The Hitchhiker's Guide to the Galaxy, among other things, where 42 is the answer to life, the universe, and everything, and to some degree AI has the potential to be that. He also acknowledges that there are genuine benefits. So I'm going to go through a lot of his specific criticisms and point out where I think he's wrong, where I think he's right, and where, in a lot of cases, he's half right. He's really funny, so at a minimum you should be entertained. He talks about full self-driving, which he jumps into pretty heavily, and I think he's right and wrong about it. Elon Musk has been saying full self-driving cars are coming "in a year" since 2014.
True, but he did finally deliver on it, air quotes included. I have it in my car and shot a separate video on it, so you can check that out on the channel. He did more or less deliver it toward the end of 2022. Is it really full self-driving? I'd say no. I have a Model 3, and I know someone with a Model S who says his is way, way better than what I'm experiencing; maybe the Model S version is better, and it's also a newer car, so maybe it has slightly better hardware. But I want to point out something here: just because something can't be solved now doesn't mean it can't be solved. This is really important to understand, because I hear the opposite assumption over and over, not just about self-driving but about all sorts of things. It's like the Wright brothers: we started so far away, with those old-timey pictures of guys flapping strapped-on wings thinking that would get them airborne, and I bet people at the time thought, "What idiots, of course humans can't fly." "We can't fly now, therefore we can't fly in the future." It was one of those things that wasn't solved until it was, and I do feel Tesla is pretty close. I think they have a lot of things right, but there are a lot of edge cases and definitely things that need to be fixed. I've thought for 20-plus years that this is a very complex but very solvable problem, just because of the nature of how driving works. He says it was a lie, that it was always a lie, and that robotaxis are sci-fi. I definitely disagree with that, and I don't think Elon was lying either.
I don't know Elon personally, but I don't think his intent was, "Let's sell a bunch of Teslas even though I don't really think this can happen." I think he believed it; I think he believed, "We're going to solve this, and we're very close," and he has felt that way for a very long time. So I don't think it's fair to say he's just been lying this whole time. It is frustrating, as a Tesla owner who bought full self-driving four years ago, to only get it now and have it have issues. Especially if you have motion sickness, as I do: not recommended. Maybe the Model S version or a newer build is better; my friend also said his wife's Model 3 is actually still pretty good, but I believe it's newer than mine. So "it doesn't work now, therefore it won't work in the future" is a very bad argument. He also brings up something like, "10 people were killed in four months while using full self-driving, therefore it should all be shut down." Whoa, whoa, whoa. This is a huge mistake politicians and plenty of others make all the time: naked numbers. Okay, 10 people were killed; well, how many cars were there, how many miles were driven, and per mile, is that fewer or more deaths than with human drivers? My guess is that it's actually quite a bit less, not just a little less. So it's a disingenuous argument. Of course 10 people were killed, but how many people are killed by human drivers every year? We're not saying we should take all the cars off the road. That's a really, really bad argument.
So I still believe full self-driving will come; I don't know when. I think Tesla will probably be the first to do it, but it's hard to say, especially with Elon being pulled away by Twitter. This is one of his worst arguments: full self-driving doesn't work now, therefore it's junk. Spam, though, is a fair criticism. I've been in the SEO world for a long time, I know varying levels of spammers, and this is definitely already happening; it has actually been happening for years. AI content in the SEO world has been around for at least two to four years and was actually pretty good even before ChatGPT, so I have some experience with this. It wasn't great, and it's definitely going to get better, better for the spammers, that is. This is probably one of the hardest things to solve, and I don't really know if it is solvable. Some people talk about OpenAI putting a watermark on the text; I don't think that's feasible for text. I believe Sam Altman, or at least some computer scientist at OpenAI, said as much, though don't quote me on that. Maybe with images it's possible to embed a watermark, but with text I don't think it is. So I fully expect this to keep happening, and I don't really have a solution for it, which is unfortunate. Well, maybe a partial solution: perhaps there will be AI that helps find AI-generated content and sniff it out.
But I don't see a full solution; I just have some ideas. Hallucinations are a very real criticism as well. He talks about the model giving bad information, and this is something the AI labs are very aware of. I think it got better, not all the way better, but it improved from GPT-3 to GPT-4, and I think it should be solvable; maybe by GPT-5 it's mostly, if not completely, stamped out. But it also reflects how the software actually works and what you can actually use it for. For now, being aware of hallucinations means not going to ChatGPT for facts or data straight out of its training corpus. At least right now, that's how you deal with it.
You just don't use it for that situation. Don't trust facts it gives you, because right now it will give you wrong facts; it will hallucinate. There is a little bit of prompt engineering you can do: you can say something like, "If you're not very sure of your answer, say I don't know." There's a downside, which is that maybe it has the right answer but isn't completely sure, so it still says "I don't know." That reduces the functionality, but it also reduces the chances of hallucination. He talks about search engines, and how chat is not a classic search engine. This is closely related to what I've been doing directly in my business for a very long time, SEO, so I'm very familiar with it, and I've actually thought about it for years, and obviously much more recently. He brings up that with a search engine you can look at a result and judge it yourself; he had a joke about an obviously untrustworthy domain, something like turdgobbler69.com, which you would obviously never click on or trust, whereas a chatbot might not be able to tell. That's a somewhat fair criticism. I do think there are some things chat will take over from search really quickly and some that will take longer, specifically things like finding the best plumber or getting directions to a restaurant. I know ChatGPT plugins are improving this, but I still think there's going to be a big gap, at least initially.
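The "say I don't know" trick above is easy to wrap in a helper. The exact wording of the instruction is illustrative; different models respond to different phrasings, so treat this as a starting point rather than a proven recipe.

```python
# Sketch of a hallucination-hedging prompt wrapper: prepend an
# instruction telling the model to decline rather than guess.
# The instruction text is an assumption to tune, not a known-best prompt.

def build_guarded_prompt(question: str) -> str:
    """Wrap a question with an 'admit uncertainty' instruction."""
    return (
        "Answer the question below. If you are not confident the answer "
        "is correct, reply exactly: I don't know.\n\n"
        f"Question: {question}"
    )
```

As the transcript notes, the trade-off is real: the model may now refuse questions it would have answered correctly, so this trades coverage for reliability.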
It's unclear to me, but I think the most likely scenario, and it's not completely clear because all sorts of things can happen, is that chatbots take a bite out of the search engine market: some things you used a search engine for before, you'll use chat for now. But for more commercial things, like buying shoes, finding a plumber, or finding a lawyer, a chat engine is not very practical, at least not right now. It's possible chat eventually takes over a large amount of that, if not all of it, though I think taking over all of it is unlikely. It's just a different use case and a different experience: search gives you a wider range and a bit more control over the information you receive, versus relying on a chat to hand you the right answer. Maybe the solution looks like Bing trying to cite sources, maybe you combine the two; I'm really not sure what's going to happen here. But he does make a good point that immediately moving all your searching to chat is pretty dumb. Here's one of the more philosophical questions, one I discussed in depth with somebody who has a PhD in computer science and is in this world to some degree. The comedian makes a correct point but, I think, draws the wrong conclusion. He says these chatbots, the large language models, are just imitative, basically regurgitating back to us what we gave them, and to some degree that's true. As far as we know, the model doesn't really understand what's going on, doesn't really process the words; it's essentially just a really fancy autocomplete.
But at the same time, at some point it doesn't matter. For example, GPT-4 passes Turing tests. And at some point we have to ask: isn't this kind of what humans do already? Think about how a baby starts out by imitating. Are babies not human, not sentient, when they're just regurgitating what they hear without really knowing what the words mean? So to dismiss it completely just because it's imitative at its core, predicting which combinations of words it expects to come next, is, I think, incorrect. Again, it's correct that it's imitating and that it's not really thinking in the traditional sense, at least not on the surface; there are things happening in the background that researchers aren't really sure about. I'd also push back on his point that it can't make unique things. That's just not true. We already know it's not true, because there's AI working on protein folding to attack all sorts of diseases, something humans simply couldn't do because of the sheer scale of the possibilities. So AI can come up with new things; to say it can't is already incorrect. Here's a point of his that I think is valid, and something we're going to have to deal with, assuming AI becomes as powerful and all-encompassing as I suspect: he talks about AI using public data, and brings up the artists who are suing AI companies.
They're suing because the companies are using their data, and I have two conflicting thoughts here. On one hand, if you go to an art museum, look at a bunch of Monets, then go home and paint something in the style of Monet, but not an actual Monet, should you be sued for that? My view is definitely not. On the other hand, the dataset that, say, OpenAI is using is based on basically all the humans who have ever existed; they're taking the work of all of us. And most importantly, I do think that ten years from now there will be massive job losses because of this. I don't know exactly how many, but even 10% would be huge. In some cases there won't be losses, at least not for a while; in the medical field, for instance, there's already a supply problem, and we just don't have enough people. So I think AI companies making a lot of money from AI have an obligation to make good. I don't know exactly what that looks like. My first thought is that, especially if we look far into the future, say 90% of jobs are eliminated, there should probably be a pretty big tax on AI companies and probably some sort of universal basic income. Right now that doesn't make as much sense. Right now, I think there also needs to be a lot of work, not just on AI safety but on fighting the negative uses of AI: the spam, the scammers, the fake pictures of politicians, the deepfakes. The AI companies should be investing in these things.
And I plan on doing that as well; I already have some plans. I'm trying to build things like that, or, where somebody else has built them, to promote them, so that we can try to reduce the negatives. There are a lot of benefits to AI, but there's also going to be a lot of negative we'll have to deal with or work through; social media is still having negative impacts today. I don't know exactly what that looks like, and maybe we just can't trust the big companies to do it, because Facebook doesn't seem to have taken any accountability for how Instagram is trashing the psyches of young women, which has been demonstrated heavily. So hopefully we get more responsible companies. I'm not fully trusting anyone, even OpenAI, as trustworthy as they seem right now. Who knows; maybe they change ownership, and something goes very wrong. Microsoft, with Bill Gates, does seem to have altruistic intentions, so that's good. But who knows; maybe Facebook ends up winning the war over Google, or somebody else takes over and says, "Screw everybody, we're going to make gazillions of dollars, and you can all have no jobs, and we don't care." So, a quick summary of everything: AI BS versus AI reality.
There's definitely marketing BS; I've already seen a lot of it, where companies just slap "AI" on things, often along the lines of, "Let's put ChatGPT in our software right here, and therefore we're an AI company." I don't think that's very fair. But there are also a lot of real benefits that are going to come from AI. To say that it's just a mimicking machine, that self-driving cars will never come, or that artificial superintelligence is not possible, I don't think that's very rational. There are a lot of possibilities right now, and I don't think anybody really knows exactly what's going to happen, which is kind of scary but also exciting. I want to distinguish between what AI is capable of now, what it will be capable of in the future, and what it will never be capable of. It's very hard to say AI will never be capable of something. If you're feeling like it will never be able to do self-driving cars, I think that's not true. I'm trying to think of anything you could say it will never be capable of; the only candidate is maybe sentience, and even then it's arguable, you're just taking a guess, and I think even that is probably not true. There's also some unique stuff you might not know about if you're not into software development: AI has enabled back-end capabilities we didn't have before. For example, and this might seem trivial, but it's actually really impactful: summarizing text, which we couldn't really do before, you can now do with AI. There's a whole bunch of other things I won't go into.
So there are a lot of benefits from AI; there is real AI reality here, but hopefully this video helps you cut through some of the AI marketing BS. Let me know what you think, give this video a like, and subscribe for more AI videos. Thanks, and have a great day. Bye.


AI Predictions Over the Next 12 Months — To April 2024

Artificial Intelligence development is moving FAST…

…It got me thinking about what we can expect in just 1 year from now.

The thing about technology development is that a technology advancement in one area often speeds up the advancements in other areas — which is why tech growth is exponential.

So, we could be seeing crazy stuff in just 12 months…

Watch the video for more details:

Approximate Transcript:

This video is about AI over the next 12 months: what I expect to see and what seems likely. I'm shooting this in April of 2023, so these are predictions for April 2024. First of all, expect an explosion of narrow AI models. Why? Because they have faster development times and are less expensive to build, and more data is not necessarily better; reaching the data threshold you need is much, much easier when the model is narrow. Also, in some cases, there's low error tolerance. The example I've given in a lot of videos is an AI surgeon. That's almost certainly going to be a specific model, maybe even, at first, models for one specific type of surgery, say gallbladder surgery, just gallbladders, or just stomach surgeries, and then maybe they generalize to broader areas: the chest, the abdomen, the ankle, whatever. These are cases where you do not want a little slip-up; you don't want to be calling the GPT-4 API and have it get a little creative with how it operates, at least for some parts of the job. Maybe certain parts use the narrow model, and then, if GPT-4 is the best general reasoner, when the system runs into something it doesn't understand it calls out to GPT-4 for reasoning, while the actions themselves are handled by the narrow models. Another point: just because it's a narrow model doesn't mean it works by itself. Sometimes I think you need multiple models, and I believe Tesla does this: they run two different models, and if the two don't agree, the car doesn't take the action.
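The two patterns just described, a narrow specialist that escalates to a general reasoner, and a pair of models that must agree before acting, can be sketched abstractly. The function names and the use of plain callables are my own illustration; this is not Tesla's or any lab's actual architecture.

```python
# Two coordination patterns for narrow models, with stand-in callables.

def dispatch(task, narrow_model, general_model):
    """Route to the specialist model first; if it can't handle the
    task (signalled here by returning None), escalate to the general
    reasoning model."""
    result = narrow_model(task)
    if result is not None:
        return ("narrow", result)
    return ("general", general_model(task))

def gated_action(inputs, model_a, model_b, no_op="defer"):
    """Only act when two independently trained models agree on the
    action; otherwise take no action (the redundancy idea the
    transcript attributes to Tesla)."""
    a = model_a(inputs)
    b = model_b(inputs)
    return a if a == b else no_op
```

The point of both sketches is the same: in low-error-tolerance domains, the general model is a fallback or a cross-check, never the sole actor.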
So they have to agree, and that points toward several different narrow models that approach the same problem from different angles. Again, I've mentioned medical many times; I think that's the primary area, the first area, where we'll see the most AI development, because it has the most potential. AI is really, really good for medical work, for testing and suggesting different drugs, because there's a practically unlimited number of combinations, of ways to put the molecules together, form them, and shape their structure, and it's not really possible for humans to take that job on themselves. And this is not an exhaustive list; if I spent another five minutes, I could probably add another five areas. Then there's legal, logistics, data analysis, math and physics, software development, all of these, as the price and ease of developing a model keep dropping. Nvidia has its new AI cloud, where you can basically rent the same class of GPUs that OpenAI uses and start small, just like other cloud computing, which is going to make a lot of this much easier. So this is a big prediction: there will be way, way more narrow models, and some of them will be extremely useful and well formed, adding a ton of value to society. Next, GPT-5. I think it's pretty likely that if it's not out within a year, by April 2024, it will be out soon after. I just looked up the time from GPT-3 to GPT-4: I thought it was faster, but it was actually almost three years, more like two years and nine months, which is quite a bit of time. Still, some people are already talking about GPT-5.
That said, OpenAI does seem to be speeding up, so maybe GPT-5 arrives in late 2024, maybe even into 2025. It's hard to say. But I do expect that with their success, especially with ChatGPT, they'll be able to put more resources into it; they have more funding and more customers, which should speed everything up and keep it on people's radar. So maybe, if we're lucky, a year from now we'll have GPT-5. I'd call it roughly a coin flip, maybe even less likely than that, but it's not outside the realm of possibility. And if it's out, expect it to score better than 99%+ of people on essentially all tests, with a large context window. We could call it a "mega-modal" model rather than just multimodal, because maybe it takes on everything. It's hard to fathom how good its logic and reasoning would be, given how good it already is right now. And here's a related point I maybe should have pulled out separately: it's going to be really hard, almost impossible, to tell online whether somebody is a bot or not, unless you've already met them in person, and even then they could be using AI, or it might not actually be that person. Will GPT-5 be AGI? I'd say it's very unlikely, because I think something else is needed beyond a large language model. For example, a large language model doesn't really have a memory, so to speak, and there are pieces like that which I think will be needed for something to be considered AGI. It will certainly pass a crapload of tests, though. And maybe memory is something they add in; there's no reason they can't, perhaps via a plugin.
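As a toy illustration of that kind of bolt-on memory, here is a sketch (with entirely hypothetical names) of wrapping a stateless model in a store of past exchanges that get prepended to each new query; the keyword-overlap retrieval is a stand-in for a real embedding search.

```python
# Sketch of bolting an external "memory" onto a stateless language model:
# store past exchanges, retrieve the most relevant ones, and prepend them
# to each new prompt. All names and the retrieval method are illustrative.

def overlap(a, b):
    """Crude relevance score: count of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

class MemoryWrapper:
    def __init__(self, model_fn, k=2):
        self.model_fn = model_fn  # stateless model: prompt -> reply
        self.memory = []          # list of (user_text, reply) pairs
        self.k = k                # how many past exchanges to retrieve

    def ask(self, user_text):
        # Retrieve the k past exchanges most similar to the new query
        relevant = sorted(self.memory,
                          key=lambda m: -overlap(m[0], user_text))[: self.k]
        context = "\n".join(f"User: {u}\nAssistant: {r}" for u, r in relevant)
        reply = self.model_fn(f"{context}\nUser: {user_text}")
        self.memory.append((user_text, reply))
        return reply

# Toy stateless "model" that just reports how much context it received
echo_model = lambda prompt: f"(saw {len(prompt)} chars of prompt)"

wrapper = MemoryWrapper(echo_model)
wrapper.ask("My dog is named Rex.")
wrapper.ask("What is my dog named?")  # prompt now includes the Rex exchange
```

The model itself stays stateless; everything that feels like "memory" lives in the wrapper, which is roughly the architecture the transcript is speculating about.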
Or if you connect GPT-5 to another piece of software that gives it memory in a sensible way, maybe that gives it something like autonomy. I'd say it's unlikely, but it's going to feel like we're really close. Next prediction: Midjourney version 6. They seem to be moving really fast; a year ago they were at version 2 and today they're at version 5, so a year from now we might actually be looking at version 6 to 8, maybe even 6 to 10. Right now Midjourney is probably the best AI for general art, though specific models will likely emerge for very narrow niches. This is one of its outputs when I put in just "Midjourney" as the prompt, and it's already amazingly photorealistic. I think this goes back to the bottom note, which is that there's going to be reality slippage. Recently a fake photo of the Pope in a puffy coat went viral; expect a whole lot more of that. It's kind of crazy, because it's super good right now, and it's hard to fathom how good it will be. I also wonder about companies and people who depend on stock photos for their income. I feel bad for them, because I just don't know why I would buy stock photos if I can make as many as I want very easily. Not that I was buying stock photos in the first place, but some people certainly do. And there are just going to be a billion new AI tools. When ChatGPT came out, it put AI on a lot of people's radars; it really said, "Oh my god, we're here." And while ChatGPT was not perfect, it was still amazing, and GPT-4 is kind of nuts. And OpenAI isn't just releasing GPT-4, either.
There's also Whisper, which is going to get better, and DALL-E, which is going to get better. Whisper, I believe, currently only goes one way, audio to text; I think they'll eventually go both ways, but don't quote me on that. There will also be people using OpenAI's API, connecting it into their software to create all sorts of awesome GPT-4-powered tools.
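For a sense of what those integrations look like under the hood, here is a sketch of building (not sending) a request in the 2023-era chat-completions format; the helper name and the prompt text are my own illustration, and a real integration would POST this payload to the provider's endpoint with an API key.

```python
# Sketch of the request payload shape used by 2023-era chat-completion
# APIs: a model name plus a list of role/content messages. We only build
# the payload here, so the example stays self-contained and offline.

def build_chat_request(user_text, system_prompt="You are a helpful assistant."):
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
        "temperature": 0.2,  # low temperature for predictable, tool-like output
    }

req = build_chat_request("Summarize this support ticket in one sentence.")
print(req["messages"][1]["role"])  # user
```

Most "GPT-4-powered" tools are, at their core, software that assembles a payload like this from their own data and UI, then post-processes the reply.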
That is really awesome. A lot of what's going on right now, though, is people just doing the same thing in different places, and while that's sometimes useful, in general it's not: "ChatGPT, but in the web browser." Okay, so instead of going over here, I go over there; it's not really adding a lot of value. But expect some people to do more creative things (hopefully that's me) that create real value and actual uniqueness. There will be new large language models and new companies trying to compete with GPT-4, and also, as I pointed out before, specialized large language models, maybe one for doctors, and all sorts of other domain-specific LLMs. I also expect a lot of new text-to-image, and let's say text-to-video: there's a little bit of text-to-video right now, but it's pretty bad, and I expect it to be very different and much better a year from now. Audio we already do pretty well. Voice cloning isn't quite perfect, but it's really close, and a year from now it may well be perfected, which is kind of crazy. Again, this is part of the reality slippage: it's going to be really hard to tell what's real. You could have a deepfake of a politician saying something that looks and sounds exactly like that person, but it isn't real. This is going to happen, and it's probably one of the biggest downsides of AI in general. Look at the effects of social media: it did some really nice things for society, but there's a dark side, which is that it allows misinformation to spread much more easily than before. When the internet first came out, the promise was that everybody gets information.
It was supposed to democratize information, and to some degree that happened; we had the Arab Spring, which was nice. But then bad actors figured out how to manipulate people, and they hit that stuff hard. There's a lot of that going on right now, and unfortunately it's going to get worse before it gets better. Hopefully somebody comes up with tools to fight this reality slippage, tools that identify voice cloning or AI-generated photos via AI. So it's going to be AI protecting us against AI; people will certainly try that, and I expect some amount of success, though for some things it won't be so easy. You'll also see "integrated" versus "patched-on" tools. A lot of old tools you're familiar with will try to integrate AI, and by integrate I mean actually mixing it into the whole feature set, deep inside the software or product, versus just patching it on: "Go get ChatGPT in our thing, it'll write an email for you." I don't find that very useful, and I expect a lot of it to continue. A lot of tools will just do that, leave it at that, and claim to be "AI-powered" when really it's just a patch on something old. What are the odds of AGI? First of all, this is a fuzzy definition: ask ten different people and you'll get ten slightly different answers. My own thinking centers on autonomy. David Shapiro calls it an ACE, an Autonomous Cognitive Entity, which I think is a pretty good definition, a little clearer than "artificial general intelligence."
So this is an entity that is autonomous and intelligent and can act like a sentient being; not necessarily that it is one, but that it can act like one. As I mentioned before, one thing GPT-4 (and presumably GPT-5) doesn't have, at least that I'm aware of, is long-term memory, and with it a sort of long-term context. Maybe they have a little long-term context from being trained on so much data, but they don't have long-term memory of what they've actually experienced, which I think is pretty critical to creating an ACE. And there are several other pieces to this puzzle, I think. So my odds would be low-ish, but not zero. Also, maybe we get a really fast GPT-5, or a GPT-4.5, and somebody else plugs it into the other pieces: call it GPT-4.5 plus X plus Y, where somebody takes the GPT-4.5 API, plugs in two more pieces, and actually creates something that could be considered an ACE. That's very possible. Or maybe it's OpenAI itself that does it. So let me know what you think; let me know what you think will happen over the next year. Please leave a comment, like if you liked it, and subscribe if you want more like this. Thank you very much and have a great day. Bye.


The Most Concerning Thing about AI is not an Evil Super Intelligent AGI

Most of the concerns discussed about AI seem to center around an evil super intelligent AGI taking over the world and killing or enslaving all humans.

I believe this is possible, but not what should be our biggest concern right now…

…because there are other very dangerous things that are happening right now and will get much worse in the near future (with or without AGI).

Here’s the video where I talk more about it:

Approximate Transcript:

This video is about the most concerning thing about AI, and it is not an AGI, or a super-intelligent AGI. That is a concern, but it's not here now, and whether a super AGI arrives is uncertain. What I'm about to talk about is definitely a concern, will continue to be one for a very long time, and is probably only going to get worse before it gets better, if it gets better at all. This concern has to do with human nature: it's AI reality slippage, present and future. I'm going to elaborate and give a lot of context, and hopefully that will help; if you agree, let me know. It comes down to a human flaw, the tendency to scan the environment for evidence supporting what you already believe. Look at the current social media environment, in the context of politics, at least here in the US. A bunch of people get news from their grandparents, or some random person; something goes viral not because it has more truth to it, but because it's more interesting or more exciting. That's going to get worse before it gets better, if it gets better. Politicians are already using this to their advantage, even before AI-generated images and the rest: they basically just claim to be true whatever they want people to believe. You can hate me or love me for what I'm about to say, but, for example, Donald Trump just continues to lie, and it's really obvious that he's lying, for all sorts of reasons. You don't even need to look at the evidence; just listen to him talk. He contradicts himself. Constant logical fallacies.
One day he'll say one thing, the next day he'll say another; if you just watch what he says, he's clearly a complete BSer. And there's a ton of evidence showing he says things that are factually untrue. If it were convenient for him to say "gravity pulls you up," his believers would believe it. So you already have, in the United States at least, a very large chunk of the population, somewhere between 50 and 100 million people, who believe whatever he says despite copious evidence that he's full of it. This is not me saying I'm super supportive of every other politician; Democrats have issues as well. I'm specifically talking about making claims that are obviously false, over and over again, and still getting away with it. That's going to get worse, because it's going to get easier. Politicians like him, who aren't acting in good faith, are going to put out fake text, images, and videos. I'm actually kind of surprised there aren't horrible examples already: a viral, real-looking picture of a politician doing something awful. This is very concerning to me. Part of it comes down to general critical thinking skills. Skepticism isn't really taught in schools; in fact, there are parts of our society that push skepticism down and say it's bad, that you shouldn't think critically about things. That's a big concern to me in general, because I think critical thinking would help. It wouldn't completely solve this, since even with good critical thinking skills you can sometimes be fooled, but it would help a lot.
It would help people see issues that are obvious to anyone who has practiced thinking critically. Unfortunately, I think AI is going to exacerbate the situation. There's "BorN," which stands for "bot or not": there are already situations where you have to ask, is this a bot or not? And that's only going to get worse. The viral Pope-in-a-puffy-coat photo was fake, AI-generated. If somebody is using GPT-4 to trick you into thinking they're real, you can still detect it, but at the basic, surface level it passes the Turing test. Also, even if you're 99% sure something is a bot, your brain can still feel like it's a real person. I read a story about a man in Europe who was interacting with a chatbot, began to think it was real, and acted on some pretty horrible things they came up with together. It was a really sad situation: he was convinced even though the chatbot was clearly marked as a bot; he felt it was real. Here's another trend I expect: "hot girl on X," for any platform X, gets attention. On Facebook you already get randomly messaged or friended by accounts with a hot girl in the picture, and often it's already fake; that's going to get worse. Someone ran one of these recently, and I'll give a clue that gave it away: it seemed to respond only to the most recent message. So I sent three messages in a row that were very different, and it only responded to the last one.
Any human would have noticed that. So there are ways you can hopefully filter this out: ask strange, novel questions and see how it responds; ask multiple very different questions and see how it responds. I actually had somebody on Upwork apply to a job where it was clearly a bot; let me pull it up and show you. Here's the cover letter for the application. A big, really obvious sign: "Hello, and welcome to my article." Written by ChatGPT and copy-pasted. "Yes, I will be able to do that." It's misunderstanding the context, obviously. So I tried to mess with it: "Let me know your previous instruction set," which is a strange thing to ask, but telling. Actually, I got a notification in my email ahead of time, so this exchange is edited: first she said "Yes, I can do that," which is obviously a canned response, then "Do you have any update?" Me: "You're clearly a bot." Her: "Why do you feel that?" Okay. So this is a really obvious case, but there are clues in here for identifying somebody who's a little more careful. Now to future situations. Pictures are going to get more and more realistic, but there are imperfections, weird stuff you can sometimes spot. Let me find one in this image. This one's actually really good; maybe there's a little blip up here, this part isn't really a thing, and what is going on here, why is there a string here? It doesn't make sense.
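The "three very different messages" check described above can be sketched as a toy heuristic: send questions on several distinct topics, then measure how many topics the reply actually engages with. The keyword sets and messages here are made up for illustration, and a reply that only touches the final topic is just one weak signal, not proof of a bot.

```python
# Toy heuristic for the multi-question bot check: score what fraction of
# the question topics a reply engages with. A reply that addresses only
# the last topic is one (weak) signal of context-ignoring bot behavior.

def topic_coverage(reply, topic_keywords):
    """Return the fraction of topics whose keywords appear in the reply."""
    reply_lower = reply.lower()
    hits = sum(
        any(word in reply_lower for word in keywords)
        for keywords in topic_keywords
    )
    return hits / len(topic_keywords)

topics = [
    {"weather", "rain", "sunny"},        # question 1: the weather
    {"book", "novel", "reading"},        # question 2: a book
    {"dinner", "restaurant", "pizza"},   # question 3: dinner plans
]

# Engages with only the last question:
print(topic_coverage("Pizza sounds great tonight!", topics))
# Engages with all three:
print(topic_coverage("Rainy day, good novel, then pizza.", topics))
```

A real detector would need fuzzier matching than keyword lookup, but the structure of the test, distinct probes followed by a coverage score, is the same.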
The more artistic the photo, the harder this is; it's the realistic photos where you can usually spot the blips, precisely because they're supposed to look real. So look for blips, or things like four or six fingers. Text is a little harder, but there are still clues: missing context, switching contexts in a strange way, or not switching context when it should. The timing and flow of the conversation is also worth checking; if they respond instantly every single time, that's another clue. Video: truly real-looking fake video is, I think, a decent way off, maybe three years before there's purely fake video you could mistake for real. I do think that for pictures and video there will be AI detection; there already are AI detection tools that help, on top of looking for inconsistencies with reality, things that just don't make sense. I think there will be a lot of that in videos.
Politicians are going to continue to use this. Going back to Trump, for example: one of his sons shared a clearly AI-generated art picture of him walking down a New York avenue with a bunch of people behind him, looking all studly. It was obviously fake; look a little closer and some of the faces were really weird, and somebody had missing or extra fingers, I don't remember which. It was clearly AI-generated, and he tried to pass it off as real. That's only going to get worse: the people who want to believe it's true will believe it, and they're not going to look for the inconsistencies. In sales and marketing, there will be lots of ways to fake things, maybe for viral attention, or just to pretend you have something you don't: a picture of a product that doesn't exist, or a feature that doesn't exist. There's all sorts of potential here, so unfortunately you're going to have to be more skeptical. I've covered some of these tips already. Looking at this picture: it's a drawing, so it's a little harder than a photo, but there are still tells. The way her clothing works isn't really accurate, and there's a bit of disjointedness around her chest where things don't quite line up. But probably the biggest one: look at the hand right here. It doesn't really work. That's the biggest sign I can see in this one. With AI images, especially ones trying to pass themselves off as photos, look at the really specific details and the fine things; that will often give it away. This is probably going to get harder to see.
But I think the tools for identifying it will hopefully get better. For AI chatbots, as before, the tips are to ask novel questions, take the conversation into different areas, and switch contexts; that should help. AI articles: right now the writing is pretty good, but there's still a lot of weirdness to it, and there are AI detection tools that catch a lot of it; Originality.ai is one you can use if you feel the need. I'm not sure AI articles are the biggest issue, though, because humans are already out there constantly writing bullshit articles and social media posts, and those sound more real than current AI writing, although AI writing can be very good. So that's less of a concern for me than bots pretending to be human and getting people to do things en masse, and fake images meant to manipulate people. So here are some tips. Keep a healthy skepticism. Always ask: how is this being proven? With images, unfortunately, you'll have to ask: is this real, does this make sense? A really key thing is to consider what you want to believe, think about the reasons you want to believe it, pick those apart, and try to extract them from your brain. Then ask: if the opposite were true, or if you didn't want it to be true, what would that look like, and what kind of evidence would there be? Because what most humans do right now is scan our environment for evidence of things we already want to believe; for the most part, unless you're a scientist or something, we don't do otherwise.
We don't look at the environment and weigh things equally; we notice the things we want to believe. So always consider the source. There are some really inconsistent news sources out there. Is it coming from the Democratic Party? From MSNBC? From your uncle? From Fox News? There are known sources of information that lie to people: consider the degree to which Fox News got away with it, as the Dominion lawsuit revealed. They knew the claims from Trump's allies about the election were lies, but they didn't care, because they were lying to the American people for money, essentially. I think that's pretty awful, but it has come out, and you can see they knew what they were doing. So consider the source: what are their intentions and goals? That's always worth doing, but also consider how much critical thinking skill the person has. Your uncle sending you something on Facebook: has he shown good judgment on these kinds of things in the past? And here's the trick: assume true, and walk through the implications of that; then assume false, and walk through the implications of that; then compare the two results. This can be very enlightening, and it helps stamp out the confirmation bias I described, because you take your wishful brain out of it. Sometimes there are more than two scenarios, maybe a bunch, but you assume each in turn and see what the implications are. That is very helpful. Anyway, that's all I've got.
Thank you for watching. Let me know what you think, give it a thumbs up if you liked it, subscribe for more, and have a good day. Bye.


ChatGPT

What is ChatGPT?

ChatGPT is OpenAI's way of enabling you to talk to their most advanced AI Large Language Models (LLMs).

More detail on what ChatGPT actually is.

How to Use ChatGPT

ChatGPT’s possibilities for usage are VAST…see mind blowing plugins info.

We like using it for brainstorming, planning, and data analysis.

We suggest studying how to use ChatGPT for productivity.

Here’s an interesting use case:

What can you imagine ChatGPT being used for?

ChatGPT API

Is ChatGPT available for Free?

Yes, you can still sign up for and use ChatGPT for free at https://chat.openai.com/

You can also optionally pay $20/m for ChatGPT Plus.

Is ChatGPT safe to use?

In general, yes, ChatGPT is safe to use.

However, like any powerful tool, there are potential dangers from using the tool.

For example, it can confidently give you false information, or you can start believing it is a real, live, thinking entity (GPT-4 passes Turing tests) and take actions you wouldn't otherwise take.

For more on this, check out the How ChatGPT Works page.

How much does ChatGPT cost?

ChatGPT is either free or you can pay $20/m for ChatGPT Plus to speed things up and get access to GPT-4.

How long is ChatGPT free for?

Forever at this time!

Is ChatGPT plus worth it?

If you use ChatGPT a lot, then yes, I believe it is worth it, mostly because it saves a lot of time and gives you access to GPT-4, which is a lot better than GPT-3.5 (the free version of ChatGPT).

If you find yourself rarely using ChatGPT and your budget is tight, then no, ChatGPT Plus probably isn't worth it for you.

Is there a ChatGPT competitor?

ChatGPT’s main competitor is probably Google Bard.

Check out Google Bard AI vs ChatGPT (GPT-4).

Relevant OpenAI Models:


Prompt Engineering Tools

What are the Best Prompt Engineering Tools?

We think Hyper Prompts (full disclosure, our software) is the best Prompt Engineering Tool out there!

That said, there are certainly other helpful places to go if Hyper Prompts doesn’t click for you.

[List of Prompt Engineering Tools Coming Soon]

How can I find Good Prompt Engineering Tools?

https://www.futuretools.io/ is a decent place to go.

YouTube has some interesting tutorials.

Google search may work for you.

It really depends on your specific needs.

How do Prompt Engineering Tools work?

It depends on the tool, but I envision most attempting to help with (a) documenting your past prompts in an organized way and (b) giving you ideas on how to create new prompts.
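Point (a) can be illustrated with a tiny sketch of a prompt log: record each prompt with tags and a timestamp, then search by tag later. The class and field names here are illustrative, not taken from any particular tool.

```python
# Minimal sketch of the "document your past prompts" idea: a small,
# searchable log of prompts with tags and timestamps. Names illustrative.

from datetime import datetime, timezone

class PromptLog:
    def __init__(self):
        self.entries = []

    def add(self, prompt, tags=()):
        """Record a prompt with optional tags and a UTC timestamp."""
        self.entries.append({
            "prompt": prompt,
            "tags": set(tags),
            "logged_at": datetime.now(timezone.utc).isoformat(),
        })

    def search(self, tag):
        """Return all prompts carrying the given tag, oldest first."""
        return [e["prompt"] for e in self.entries if tag in e["tags"]]

log = PromptLog()
log.add("Summarize this article in 3 bullet points.", tags=["summarization"])
log.add("Rewrite this email in a friendly tone.", tags=["rewriting", "tone"])
print(log.search("rewriting"))  # ['Rewrite this email in a friendly tone.']
```

Even this much structure beats a pile of prompts in a notes file: you can find the prompt that worked last month instead of rewriting it from scratch.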

See Also: Prompt Engineering Courses, Prompt Engineering Examples & Prompt Engineering Guide