Demo of All ChatGPT Plugins Available in April 2023

ChatGPT plugins are an amazing new development from OpenAI.

They will rapidly and dramatically expand ChatGPT’s capabilities.

If you’re curious what it is like to use the first batch of plugins, watch this video:

Approximate Transcript:

Hi, in this video I want to go deeper into the plugins. I shot another ChatGPT plugins video, but I didn’t go over all of them, so this time I’m going to try to cover as many of them as I can. My voice may not hold up and I might pause now and then, because I do still have COVID. Also, some of the plugins that weren’t working before are working now — specifically the Zapier plugin — and I’m going to show you as many of them as I can. The first one, I couldn’t even figure out how it would work. Let’s go back and do Zapier; I’m going to uninstall everything so I can show you it from scratch. It wants me to connect, so let me pull this window over. It looks like I already have a bunch of actions on here, but let’s add some new ones. Right now there are still only six actions, and I won’t show you all of them. What I found before was that these things didn’t work. I’m not going to do the Slack or Gmail ones; I’m going to do the Google Sheets one, which I think is the more useful one. Last time it still wasn’t working exactly correctly, and I’ll show you what I mean in a minute.
Enable action — okay, we’ve got that one. Let’s also enable the lookup action.
I guess I’ll just add all of these.
Okay, so this should be working now, and we’ll close this. Alright, now we’ve got it enabled here, and I have a sheet — let me pull it up. Here’s the sheet, “Yoda-isms.” What it seemed to do last time, about a week ago, was repeat itself when it wasn’t supposed to, and the very first time I tried it, it wouldn’t do anything at all. So we have “wraps,” which I think they call the sheet, and then there’s the worksheet — I don’t remember exactly how Zapier names them. Let’s say: please add ice cream, toast, and pickles. I kind of want to test whether it will put them on row six, or whether it will just use the next empty row.
Let’s specify row six. Actually, here, let’s do
row six, in cells in columns
A, B, and C respectively. I’ll pause while I finish typing this. Actually, I’m going to copy this prompt, because I want to see whether I need that last part — I believe it actually does need it. So let’s try this.
Let’s see what it does.
So it was not able to retrieve the information — that happened when I tried it last time as well. It looks like I need to specify the worksheet, so you do need to spell that out, even though there’s only one sheet within the Google Sheet. Alright, stop generating, and let’s try this again. I’ll pause.
I think it’s going to ask for confirmation and show me a preview of the action.
That’s probably a safety measure. It’s a little weird — it put ice cream, ice cream, pickles. I could have edited the preview, but I wanted to see what it would do raw. And it didn’t put the values on the correct row; it just used the next empty row, which is interesting. Still, that’s better than before: last time I gave it three things and it only wrote one of the three. It got a little closer this time.
Now let’s try a lookup.
Okay, here’s the next request — let’s see if it can retrieve this information. It says it can’t, due to the specified row being empty or the specific columns not containing any data.
Yeah, that’s not what we’re looking for. It isn’t able to actually pull the information back out, so it’s still not fully working. But again, they haven’t released this publicly yet, so I’m sure they’re still testing it, and it has already improved several times as I’ve tried it. Let’s go into some different plugins.
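(A quick aside for readers who want to see what that Zapier action boils down to behind the scenes: here is a minimal sketch of the equivalent operation written directly against Google Sheets with the gspread library. The spreadsheet and worksheet names and the service-account file are placeholders standing in for the ones in the demo — this is an illustration of the two behaviors discussed above, writing to a specific row versus appending to the next empty row, not the plugin’s actual implementation.)

```python
# Illustrative sketch only: roughly the operation the Zapier "add row to
# Google Sheets" action automates, written directly against the Sheets API
# via the gspread library. The spreadsheet/worksheet names and the
# service-account credentials file are placeholders, not real resources.
import gspread

gc = gspread.service_account(filename="service_account.json")

sheet = gc.open("Yoda-isms")        # the spreadsheet from the demo (placeholder name)
ws = sheet.worksheet("Sheet1")      # the worksheet tab (placeholder name)

# What I asked the plugin to do: put three values into row 6, columns A-C.
ws.update(range_name="A6:C6", values=[["ice cream", "toast", "pickles"]])

# What the plugin actually did: append the values to the next empty row.
ws.append_row(["ice cream", "toast", "pickles"])
```

Seen this way, the plugin’s job is really just translating a natural-language request into one of these two calls — which is probably why “row 6” versus “next empty row” is exactly where it tripped up.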
The Wolfram Alpha one I already showed working successfully — that’s probably the best one I’ve found so far. Let’s try the Shop one; I can’t remember what happened with this one last time. Let me turn off Zapier first.
Let’s try this.
I’ll show you what I was doing. It’s really interesting how it gives you some of the context there. I don’t know how useful this is, to be honest, because I wonder whether it’s actually finding the best price — it says $109 — and what made them pick this site over Amazon?
Okay, $540.
Alright, maybe it’s a classic shoe, or maybe they really did find the best one. I guess this one is eBay-ish. It’s kind of cool — I like this image down here, that’s pretty neat — but I don’t see myself shopping this way. It feels like it still needs more of a comparison aspect: maybe pick a shoe, then find several places where you can buy that shoe and give you several options, as opposed to just one. But it’s interesting. Oh, it’s still running — I’ll pause. I’m not sure why it’s still running like this, because I went and did some other stuff for a few minutes and it seems like it’s done, so we’re just going to try another one. I looked up some information on this next one, and it looks like they have a “join the waitlist” — I showed this last time — so I can’t even test it. Maybe OpenAI owns some of it, or maybe they just put that on there for fun, or Y Combinator, which I guess is a major thing in Silicon Valley. There’s more information here, but I can’t really use it; it doesn’t actually do anything as far as I can tell. Let’s try Expedia — I haven’t actually tried that one yet.
I would like to...
Actually, I’m just going to do this: “Please help me plan a trip to Hawaii in July of this year.” Always helpful to be polite to the robots. Okay, so that first response didn’t pull from Expedia, but it did ask some questions, so maybe it’s just waiting before it calls Expedia. What city? Let’s just do Honolulu.
That was actually pretty good, though it wasn’t quite sure and didn’t have all the information from Expedia yet. Let’s see what it does.
I’ll pause. It is interesting that it assumed July 1 through July 4, which is kind of weird — I didn’t give it any dates. What I might do is give it some specific dates. While it’s finishing up, I’m going to click through this link and see what it does. So it has a place to stay, but it hasn’t given flight information yet. It did automatically fill out the check-in and check-out dates — that’s kind of nice, actually pretty useful. This seems more useful than a shopping app, because I could see starting a conversation with ChatGPT about planning a trip — telling it, “this is the kind of trip I want, where do you think I should go?” — and then midway through the conversation saying, alright, now find me these things. Would that be more efficient than just doing the research yourself, asking the questions in ChatGPT or GPT-4 and then doing your own Google searches, or going straight to Expedia and searching? It’s unclear.
Let’s see — it says you can also find flights, activities, and car rentals for your trip.
Okay, here’s my response. It did change the dates. I also wanted to test whether it could handle multiple options and do comparisons, and so far it doesn’t look like it has. When I click through, it does have the correct check-in and check-out dates. I also wonder if there’s a way to limit it to lodging with a guest rating of nine and above — I think that might be pretty useful. I want to let it finish to see if it gives additional options here, and then flight info.
Okay, so here’s how I’m going to do it — I’m going to add another message here: can you give me only lodging options with a guest rating of 9.0 or greater? It acknowledged the guest rating of 9.0 or greater; let’s see if it fulfills that request. And it was successful at keeping all the guest ratings above nine, so that’s good. Now for flights.
I’d prefer a direct flight — I think there are some, so I’m asking if it has any. There’s a point I think is important here: this seems to take longer than if I just did a Google search and poked around myself. Each time I ask a question it kind of has to restart, whereas if I were actually on Expedia I do feel like this process would probably be faster. That said, that doesn’t mean there isn’t a way to improve it in the future; this is just a starting place. Okay, it didn’t quite follow the context on this one, because it didn’t actually return direct flights, so maybe I should ask again. Let’s try: are there any direct round-trip flights? That was faster than the hotel query. It generated the answer — let’s click through here.
So did it do it correctly? No — these are all one-stops, so it didn’t filter for nonstop. That’s fine. There’s flight 115, and the flight numbers are on there. Okay, let’s move on to something else. This seems okay, but not necessarily an improvement over just doing it yourself on expedia.com.
So I guess we can compare Expedia and Kayak. Let’s do that — I’ll go back here and copy this prompt.
Okay, here’s our prompt. I added a few more specifications to it as I worked through it, and I asked it the questions up front, so hopefully this gives us all the answers we need right out of the gate. Now, the way the plugin interacts with ChatGPT — some of that is up to ChatGPT, and a lot of it is up to Kayak. So maybe its options will be better, maybe it will be faster. But notice how long it’s taking, and this is with GPT-3.5; GPT-4 would be even slower if they integrate that. I do want to point out that I asked a lot more in one question than I did with the other one, but it’s already been a couple of minutes and it’s still processing, so I’m going to pause for a little bit longer. Interesting that there’s an error in here as well. Okay, it took a while, and it looked for a moment like it might spin forever just like the other one. I liked what it offered, though. The Expedia plugin didn’t do as good a job of understanding that I wanted to look at two different timeframes, and I feel like the response here is much better: instead of listing out specific flights, which doesn’t really make as much sense, it said, hey, there are multiple options. If we click “book” here, let’s see what it actually filtered for. It isn’t filtered for nonstop — but where are the dates? It does have the dates correct. So it’s still better, because the other one was just unnecessarily wordy. All of these do have a nine rating or higher, so that’s good. Let’s double-check the dates — they match — and the link I clicked for the second option is accurate, so this is good. So far I’m liking the Kayak plugin more; I think it’s better and more concise. Maybe that has to do with the fact that I gave it a better question up front. I’m still questioning whether, as it stands now, this is more or less useful than just going to kayak.com and filtering these things yourself. It could be useful in the context of a conversation, but it could also just add more time, because it does take quite a bit of time for this to run — hopefully they’ll speed that up. Alright, let’s try another one. Let’s uninstall these, because I don’t really need them, and go straight to OpenTable. So OpenTable is selected. Here’s my prompt: I would like to eat sushi for dinner tonight in Plano — can you give me some recommendations, please? I’ll unpause so you can see how slow this is even with GPT-3.5, because it is taking a long time. Right now I would say this is a far inferior experience versus just going to Google and typing in “sushi restaurant.” It’s not mentioning reviews — maybe I can ask it to include them — and if I click through and actually use OpenTable, they do have reviews, so we can ask it to list the reviews for all the locations. It also appears to have gotten the location wrong, because it’s showing Singapore.
Singapore — yeah, that’s definitely not correct. Well, there isn’t really a city called West Plano; it’s just Plano, which is really big, and I’m over on the west side, so I was curious whether it would get this right. Let’s try this again. Alright, I’m going to try this. Yeah, this works. I kind of want the Google Maps location — I really don’t want the OpenTable link, since I’m not trying to make reservations. Let’s see if it gets it right. Wow, okay, this is far superior to the initial response. Let’s see if the Google Maps link actually works.
It did a Google Maps search rather than a direct link, so it didn’t take me straight there, but it still found it — that’s not bad, actually. Let’s see if this one worked. This one also wasn’t a direct link, but it’s still pretty fast, and it’s actually not that bad; it’s convenient. It looks like there are multiple locations.
The review count isn’t quite accurate — it’s showing 204 and a 4.5 rating — and I wonder whether Wolfram Alpha would need to be integrated for it to get that exactly right.
Overall, it’s okay. I think the biggest limiting factor here is really going to be the speed: it needs to be a lot faster to be useful, and it needs to give more comprehensive data right out of the gate instead of just a bare list. It was also weird that it picked Singapore — that doesn’t really make sense. I did make it a lot clearer with “Plano, Texas”; maybe I could have said “the west side of Plano, Texas” and gotten an even better response. Okay, my voice is running out, so we’re going to have to do just one more. Let’s see — this one didn’t really work before.
I’m kind of curious about that.
It’s really weird. Okay, I kind of want to do Instacart and the shopping one.
Let’s try Instacart.
Alright, here we go. I did add Wolfram Alpha, because it might want to calculate certain things about healthy meals — just in case — and let’s see how it does with this. Now, it might actually be better to do the meal plan separately in GPT-4, but we want to make this as easy as possible for the user, and I do think they will integrate the plugins into GPT-4; I don’t see any reason why they wouldn’t. Okay, it actually did a pretty good job: it got the basic meal plan right. I’m not sure how easy or hard these recipes are — I’m not much of a cook. We’ve got one vegetarian, one salmon, and three chicken. It didn’t really give me instructions, but that’s fine; I didn’t ask for instructions. Then it asked me to add the items to Instacart. Now, I didn’t think I had an Instacart account, so we’ll see what happens — maybe I’ll create a free one just to test it. Okay, it turns out I did have an old one, and I think I’m going to be able to finish all the plugins because the cough drop is working for me. So I’ve already logged in. Let’s see if this is the same link — looking at the bottom left — okay, it’s the same link that was clicked. I’m logged in; that wasn’t my actual address. It added zero items to the cart. Is it still loading? I can’t quite tell. No, it doesn’t appear to be — oh, there we go. Boom, boom, boom. Okay.
Now, where do I want to shop from?
Let’s see — let’s find the cart. Okay.
Now, this is actually incredibly useful. I would rather it not use Instacart — I’d rather it use something else. Instacart was very useful during the pandemic, but I find it to be pretty wasteful; I’d rather just do Kroger pickup, which would be less expensive, faster, and easier. That said, this is pretty cool. I commented that the other plugins weren’t very useful, but this is incredibly useful: it created a meal plan from scratch and added everything, with the right quantities, to the cart. I could just buy it right now and have it show up at my door — that’s very, very useful already. And I can see improvements on top of this: you could have your preferences stored in the app — what kinds of food you like, how you want your meals balanced — and just say, “Alright, make me a new meal plan,” and go from there. This is really useful and cool, especially if I used Instacart on a regular basis, which I haven’t in a very long time. I would love to see Kroger build something like this, or the other grocery stores — there’s a Kroger right next to my house, so that would be more useful for me, and I think it would be less expensive and less wasteful to do it that way. But this is still really, really cool. Alright, let’s go back. We’ll leave Wolfram Alpha in there, because it seems like sometimes it needs to be integrated, and let’s check this one out. We already did a shopping app, so let’s go basketball shoe shopping again and do the same comparison. So it gave me basically the exact same thing I got the first time. I think it would be better to build a prompt separately that involves automatically comparing multiple places: don’t just give me a product in one place, give me the product in three places where I can buy it at a good price. It is interesting that it’s giving me some of the product properties — that’s pretty useful; I don’t remember if the other one did that. Oh, and size 10.5 is here. I’ll pause and let it finish. Okay, let’s see what it did, and we can compare the results. At first this one seems a little weaker, to be honest — I liked the detail of the other one, and they didn’t do the pictures right, so that wasn’t as good — and it also just gave me a bunch of different... wait, what is Klarna? Okay, so it’s an aggregator. So this actually makes more sense: instead of linking directly to the different product pages, it gives me a comparison, so I can go on here and find the lowest price. It looks like it’s $131 — did it list that correctly? $131. Okay, so this is definitely superior; I like this a lot more, and I think it’s actually useful. Again, it’s very, very slow, but it’s pretty interesting and has some potential. Alright, let’s move on and uninstall this. Now, this next one didn’t work before, and it also just seems really not useful at all: “Access market-leading, real-time data sets for legal, political, and regulatory.” Okay. I wonder if we can just ask it:
Please tell me what the FiscalNote ChatGPT plugin can do.
This kind of question should really be something every plugin handles. I don’t think I’ve tested it so far, so I should go try it on the other plugins as well. Okay, it’s giving us a good answer. This is interesting.
Now this is really weird — I guess this is like SQL? I don’t know what this is right here; maybe somebody else knows, and if you do, put it in the comments. The bit about the White House calendar is really interesting. I’ll have to try this on the other ones now. So let’s ask something like:
Please give me...
Let’s see what this is — let’s see who’s doing their job and who’s not, who’s at work and who’s playing hooky.
This may be better served by a Google search; I’m not really sure exactly what to do with it. I do feel like there’s probably some usefulness here if you’re into government news — it could let you chat with that data — but I definitely don’t see myself using it. What we’re actually going to do is go through and ask this of everything: “Please tell me what the Wolfram Alpha plugin can do,” and so on — let’s just make sure they all can do this, because they definitely should. I think the Milo one is the one that didn’t do it last time; it didn’t understand what was going on. Okay, this is pretty good.
“...by entities such as countries.” Oh really? I didn’t know that. Yeah, this is one of the coolest parts.
So you don’t necessarily need Wolfram Alpha to do this, unless maybe it gets more up-to-date information — I guess you could have also done this with GPT-4. Alright, let’s try a different plugin and uninstall this one. Let’s see if they fixed this next one, because it was the worst-performing one so far. What’s it called? Milo — the family one.
Milo.
Okay, it didn’t do this correctly last time. Again, I’ll note this is still in alpha and they haven’t released it publicly; I have developer access. I actually tried to get a plugin developed within their timeline — they had a cutoff date of April 11, and I had only gotten access about a week before. I put my developers on it, but without dropping the other urgent things we had, we couldn’t quite get something delivered in a week. From what I heard from the developers, it was a little less straightforward than it seemed to actually create a plugin that worked fully. So good on Milo’s team that this is actually working now, because it was not working before. Let me write the prompt. Okay: what’s magic today?
Hmm, this isn’t quite what... oh.
This isn’t quite what the website sold it as, so it actually seems pretty different. The website made it seem more like a tool to help you manage your family — shopping lists, to-do lists, schedules. That actually sounds more useful than this, because for this I could just go to GPT-4 and say, “Give me a fun thing I can do with my kids today.” Now, maybe if it includes the management features and it’s better than what GPT-4 or Google would provide, then it’s useful, or maybe it could be useful as part of a larger conversation. But at least it passed the test we just talked about. Alright, let’s go through this process again.
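(A quick aside before continuing, since building a plugin came up above and turned out to be less straightforward than it looked: here is a rough sketch of the general shape of a ChatGPT plugin as it worked in the April 2023 alpha — an ordinary web API plus a manifest served at /.well-known/ai-plugin.json that points ChatGPT at an OpenAPI spec. Every name, URL, and endpoint below is a hypothetical placeholder, not an existing plugin.)

```python
# Minimal sketch of a ChatGPT plugin backend (April 2023 alpha era):
# a small web API plus a manifest that tells ChatGPT what the plugin does
# and where to find its OpenAPI spec. All names/URLs are placeholders.
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI(title="Demo Todo Plugin", version="0.1.0")

TODOS: list[str] = []  # toy in-memory store for the example


@app.get("/todos")
def list_todos():
    """Endpoint the model can call to read the user's todo list."""
    return {"todos": TODOS}


@app.post("/todos")
def add_todo(item: str):
    """Endpoint the model can call to add an item to the todo list."""
    TODOS.append(item)
    return {"ok": True, "todos": TODOS}


@app.get("/.well-known/ai-plugin.json")
def plugin_manifest():
    """Manifest ChatGPT fetches to learn the plugin's name, description, and API location."""
    return JSONResponse({
        "schema_version": "v1",
        "name_for_human": "Demo Todo List",
        "name_for_model": "demo_todo",
        "description_for_human": "Manage a simple todo list.",
        "description_for_model": "Plugin for adding and listing items on the user's todo list.",
        "auth": {"type": "none"},
        # FastAPI serves a generated OpenAPI spec at /openapi.json automatically.
        "api": {"type": "openapi", "url": "http://localhost:8000/openapi.json"},
        "logo_url": "http://localhost:8000/logo.png",
        "contact_email": "dev@example.com",
        "legal_info_url": "http://example.com/legal",
    })
```

The code itself is small; the fiddly part is presumably everything around it — the descriptions, authentication, and getting the model to call the endpoints the way you intend.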
Close the action settings — back to Zapier.
Okay — and the actions it can call in the conversation.
Okay, that’s good enough. I’m actually just curious whether any of them are unable to give a response, so let’s just install them all.
Close.
This is interesting. I should probably have asked this first, before I did anything else. It’s giving some pretty good detail here and, I think, some reasonable ideas on how to use these things. Expedia next — let’s see if I can ask both at once.
Ah, no — they’re conflicting. That’s funny.
Expedia’s seems like a better type of response — something that doesn’t feel like it’s made for coders, but made for actual people to look at. It’s a good response: not too long, not too short, and I like that the list of items makes it very easy to read and see what it actually does. Let’s compare that to Kayak.
Okay, not quite as good as the Expedia one — it’s kind of phrasing things like “this endpoint,” which is a coding type of reference. Still not terrible. Alright, so far they’ve all been able to do it, which is good. That brings me to OpenTable, and I guess I can do two different ones at the same time.
Okay, this was pretty cool — it saved me a couple of steps. So far they’ve all been able to answer it.
Speak — this will be Speak and Klarna Shopping. Let’s do Speak first.
Okay, let’s see what it did here.
Okay, so they all handle it — that’s great — and it’s actually a pretty useful thing that maybe they can expand over time. That’s all the plugins right now, at least at this point; I know they’re trying to add more. Some of them are still a little rough around the edges, but hopefully you found this helpful. I really haven’t seen much information out there — other than that quick original video that OpenAI shot — on what it’s really like on the back end here and what the overall experience is. There’s a lot of potential, but it’s very slow. I can’t remember whether it uses the legacy model; if it does, it’s possible that’s why, because the legacy model is much slower than Turbo. Either way, it’s just very slow. I tried to pause the recording for every single one, because otherwise this video would have been three times longer, and it’s already a pretty long video. Well, if you’ve made it this far, I appreciate you — thank you very much. If you made it this far, subscribe, like, comment, send smoke signals, and everything. Thank you very much. Hope you have a great day. Bye.


The Implications of Large Language Models (LLMs) Hitting the Wall

Recently, Sam Altman said, “The Age of Giant AI Models is Over.”

What he meant by that was, “our strategy to improve AI models by making them much bigger is providing diminishing returns.”

So, I thought it would be interesting to explore what happens if LLMs hit the wall and improvements dramatically slow.

Approximate Transcript:
Hi, this video is about large language models (LLMs) hitting the wall, and the implications of that. In case you haven’t heard — I shot a separate video about this — Sam Altman recently stated that the age of giant models is over, which I think is a bit misleading. Basically, what he was saying is that you can’t improve these models any more just by adding more data and more parameters. And this makes sense; it’s something some people predicted was coming, because GPT-4 already captured so much of the available data. They didn’t release the numbers for GPT-4, but for reference, GPT-2 had 1.5 billion parameters — parameters being sort of like the number of neurons, or the number of different factors the model considers — and GPT-3 had 175 billion. We don’t know how many GPT-4 has, since they didn’t release that, but estimates are that it’s a big leap over GPT-3. There’s also the possibility that they’re kind of out of data. More data is being created every day, so they’re not out of data completely, but perhaps there isn’t enough to get another exponential leap. And I think he also implied — and this makes sense — that sometimes more data just isn’t better; it doesn’t necessarily give you a better answer. I elaborate on that in my other recent video. So let’s assume, for the sake of argument, that large language models — OpenAI’s included — hit a huge wall: maybe not unable to move forward, but with progress slowed dramatically, so that we don’t see anything like what people think GPT-5 should be for five or ten years, because maybe another technological development needs to happen first. What comes about because of this? Let’s look at the good. The biggest thing is probably that the world gets time to catch up mentally, especially when it comes to misinformation being spread — identifying it and helping people adjust to the new reality we find ourselves in right now, in 2023. That’s about the only good thing I can think of, except maybe that the pause some people were in favor of just happens naturally. I personally don’t think the pause is a good idea, and there are three dots here because I don’t really see a whole lot of good coming from this. I’m sure plenty of people would be celebrating if this were the case; I would not be one of them. Now the bad. Good tech gets slowed down. There are a lot of really good use cases coming out of these AI models that can dramatically help people’s lives; maybe in some cases a wall doesn’t affect that, but in other cases it likely will. Just to give an example, there’s a bunch of work in healthcare — saving lives, curing diseases — where AI has already shown itself to be quite proficient and is moving forward rapidly. If that slows down, to me, that’s bad. I think there’s also an argument to be made that this could actually be better for bad actors. The reason is that I think OpenAI continuing to move forward will help tamp down bad AI models. They have demonstrated to me pretty thoroughly that they have good intentions, and if there were a bad model out there, GPT-4 or GPT-5 could help identify it and fight back against it, and they would work on that.
So I think this actually opens the door for bad actors — it will make sense when I get to the last bullet point. Let’s look at how good GPT-4 is right now, and I would say it’s really freakin’ good. I was testing it the other day: it’s supposed to be bad at math, and it actually did a pretty good job, showed its work, and got it right — not a super complicated problem, but more complicated than the ones other people were saying it got wrong. I need to add hallucinations here, too. So there are still some things it struggles with — math to a degree, recent events, hallucinations; if you have other ideas, put them in the comments below — but not a whole lot. It does a whole lot really, really well. I think GPT-4 as it is right now is already at a point that’s pretty profound. Now, Sam Altman did state that there are other ways in which they’re looking to improve it, and I believe him. But maybe it’s just slower — more minor updates that come together further down the line, in terms of years, to create a bigger change. That’s kind of what they’ve said: a lot of their improvements were a bunch of little ones that all worked together, where the whole is greater than the sum of the parts. How much can it really improve? Well, the ChatGPT plugins actually have a lot of potential to shore up the weaknesses. Specifically, I shot a video on Wolfram Alpha and math: if those two work well together — and they worked pretty well — then that covers a huge weakness. For recent events, there are some ways around it: connecting it to the internet to some degree, or pulling information from the internet into your own database. That’s very recent, and I do think it will be helpful. I’m not sure about the hallucinations — plugins probably aren’t really going to help with that. It’s probably one of their biggest challenges, and it’s a real issue that reduces the value of GPT-4. They’ve said they’re working on it pretty hard, and I’m optimistic they’ll be able to solve it, but who knows — it might take five years before they can say it rarely, if ever, hallucinates. Then there are longer context windows. From GPT-3 to GPT-4 they technically increased the context window by quite a bit: a maximum of 32,000 tokens versus, I believe, 4,000 tokens. That’s an 8x increase — nearly an order of magnitude, which is pretty substantial. I don’t think going further is really necessary. I know some people say that if we had even more than 32,000 tokens — which is roughly 50 pages of content — you could just put a whole book in there. But the problem with that is that more data is not necessarily better. I think you get diminishing returns, and you kind of water down the things you want the model to focus on if you have these huge context windows and dump massive amounts of data into them.
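As a quick sanity check on those context-window numbers, here is the back-of-envelope arithmetic. The conversion factors — roughly 0.75 English words per token and about 500 words per printed page — are common rules of thumb, not exact figures.

```python
# Rough arithmetic behind the context-window comparison above.
# Assumptions (rules of thumb): ~0.75 English words per token, ~500 words per page.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 500


def tokens_to_pages(tokens: int) -> float:
    """Convert a token budget into an approximate page count."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE


for ctx in (4_000, 8_000, 32_000):
    words = ctx * WORDS_PER_TOKEN
    print(f"{ctx:>6} tokens ~ {words:>6.0f} words ~ {tokens_to_pages(ctx):>3.0f} pages")

# 32,000 tokens comes out to roughly 50 pages of text, and 32k vs 4k is an
# 8x jump: a big increase, though just shy of a full order of magnitude.
```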
So I don’t necessarily think that’s a big improvement. The context window right now is quite large, and there’s always going to be a limit to it; it’s a problem developers and users are going to have to deal with, it’s being worked on, and I think there are solutions for it. Recent data is something a plugin can help with significantly. They seem very resistant to adding a new dataset, maybe because they spent so much time, money, and energy training GPT-4 on the dataset they had, and recreating that dataset and retraining might cost them hundreds of millions of dollars. So it’s possible they’ll just look to band-aid it with plugins — and to me that is still a band-aid. Bing does have something like “use current search results,” which I think is helpful, and I believe GPT-4 also has something where it can go to a website and use that as a reference, which is really helpful. Again, it’s a bit of a band-aid. But I don’t think this is a huge issue, because just knowing the limitation means you can use GPT-4 just fine for almost all use cases — barring what is to me the biggest issue, which is the hallucinations. Alright, so what’s the biggest area of opportunity for AI, even with all this — even if GPT-4 is the exact same level of quality five years from now? There’s still a crapload of opportunity. I’d say it’s not only business opportunity but human opportunity, although the context of business makes a lot of sense. It’s what are called narrow AI models: models made for specific situations. I see a lot of broad, general models out there that are trying to be AGI, a generalized intelligence. That’s great work, and it’s really helpful, but I think there’s so much value you can get by narrowing the focus of an AI model to a specific use case, or a set of use cases that target a specific market. It’s way less expensive to train, and you can get higher-quality results with way fewer parameters. You could even consider taking these narrow models and making them large-language-model-sized for the quality you might get from that, though I’m not sure you really need to in a lot of cases. These narrow models are going to shine over the next two or three years, I think, regardless of what happens with OpenAI and GPT-4 or GPT-5.
I mean, there are so many use cases still to be developed. Think about how many different pieces of software are being used right now — every single one of them represents a possibility for a narrow AI model, at least, and potentially more, because there are so many different use cases. And there are so many ways to get there, especially with the low-cost base models being published open source right now that cost maybe $500 to $600 to build and train — maybe even less — and are still really good as general models. You take one of those, train it a bit more for your narrow, specific case, and boom, you have a very powerful model for a very specific use case. I think this is where a lot of the AI investment should go, and I think the people who do it are going to be rewarded greatly. I plan on doing that myself — more on that another time. So thank you for watching. If you liked this, please like and subscribe for more videos, and have a great day. Bye.
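As an addendum to the transcript: since the closing argument is “take a cheap open-source base model and train it a bit more for your narrow use case,” here is a minimal sketch of what that step can look like with the Hugging Face Trainer API. The base model, dataset file, and hyperparameters are placeholders chosen for illustration; a real narrow model would also need a carefully curated domain dataset and proper evaluation.

```python
# Sketch of the "narrow model" idea: fine-tune a small open-source base model
# on domain-specific text. Model name, data file, and hyperparameters are
# placeholders, not a recipe from the video.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "EleutherAI/pythia-410m"  # any small open base model could stand in here

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical JSONL file of in-domain examples, e.g. {"text": "..."} per line.
dataset = load_dataset("json", data_files="narrow_domain_examples.jsonl")["train"]


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="narrow-model",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # Causal LM collator: pads batches and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("narrow-model")
```

The point of the sketch is the economics, not the specific calls: the base model is already trained, so the narrow step is small, cheap, and targeted at one market.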


“The Age of Giant AI Models is Already Over” says Sam Altman, CEO of OpenAI

This statement by Sam Altman is provocative…

…there seems to be an implication that giant AI models are no longer useful…

…but this is not what Sam means.

Approximate Transcript:

Hi, this video is about something that sounds really profound that Sam Altman, the OpenAI CEO, said recently: that the age of giant AI models is already over. I think this statement, taken out of context, is a bit misleading. I saw a smaller headline that I clicked on that made it seem even more salacious — is he saying that ChatGPT is just done?
Like, it’s not good anymore? That’s not what he’s saying, but that was kind of my first reading of it.
It’s not like, oh, we’re not going to use them anymore — no, they’re going to keep using large language models. What he really means is that they can’t keep growing the improvement of these models by making them bigger. That’s the short answer, but there’s a little more context I want to add, which is that this has been OpenAI’s philosophy from the beginning. Andrej Karpathy — very famous in the AI world; I believe he was the head of AI at Tesla, and I think he’s now at OpenAI — I’ve watched several of his videos, and one of the things he talked about was that, number one, the code for these AI models, basically since 2017 when Google released their transformers paper, is very short and really hasn’t changed a whole lot. It’s something like 500 lines, which for code is very, very small. And I believe it was him who described the strategy: the way to improve the model is just to make it bigger — keep adding more parameters, where parameters are sort of like neurons. To give context, they show it in this article: GPT-2 had 1.5 billion parameters. (There’s a funny tagline here about being generated by artificial intelligence — I wonder if this is from an AI movie, or a series about AI.) Anyway, 1.5 billion; then GPT-3 had 175 billion parameters, and it was way, way better — that was a large reason for the improvement. Then with GPT-4 they didn’t announce how many parameters it has, but it’s supposed to be much bigger. So what he’s saying is that adding more parameters, more neurons, isn’t going to keep improving the model — there are diminishing returns, and beyond some point it doesn’t give you more. I think another way of looking at this is that more data doesn’t necessarily improve the quality of the model either. Just in general, from a data-analysis standpoint, more data isn’t always better and doesn’t always improve things. A quick aside, if you’re wondering why you should believe me about data: for roughly the last 20 years I’ve worked with data from both a theoretical and a practical standpoint. I have a master’s degree in Industrial Engineering, which is actually closer to data science than it is to engineering — lots of statistics and analysis of huge, weird datasets. Then I worked at a semiconductor factory for about six years, where there’s a lot of complicated data: spreadsheets with tens of thousands of rows and dozens of columns. And for the last 11 years I’ve done SEO, which is another kind of practical data analysis — very different from semiconductors, but still data. So data has been my jam for a very long time, and it makes sense to me that sometimes more data doesn’t add a clearer picture of the situation. And they have talked about this before — it shouldn’t come as a surprise, even though the headline hits like “whoa” — because it’s been discussed for a while that, number one, they’re going to run out of data to crawl. That’s not entirely accurate, though.
More data is being created every day, and the rate at which new data is created is itself increasing over time — but it certainly hasn’t been increasing at the rate at which they’ve increased their models. And again, more data doesn’t necessarily help clarify the situation. I think I’ve got a reasonable analogy. Imagine you’re trying to draw a picture and you can only do it with dots. You put in a handful of dots, and you can see the outline of, say, a guy on a motorcycle, so you kind of know what it is. Then you put in a bunch more dots and you get a lot more clarity: you can see his facial expression, you can see that he’s got a bandage on his leg. Then you put in more dots and you get a very clear picture. Now, when you add even more dots to the picture — to the dataset — there’s no additional clarity, or the clarity that’s added is very minor. I think this metaphor works for how they’re dealing with the data and the parameters of GPT-4 and beyond. It does a lot of things really well right now, and adding more data doesn’t necessarily improve that; it can actually set it back. Also, as you grow the dataset, each new addition is a smaller percentage of the total. So when you keep adding more, it approaches a kind of baseline, and each addition becomes a drop in the ocean — there’s only so big it can get, only so many conclusions that can really be drawn from the data at some point. But there’s also a flip side, which is that sometimes more data can actively be bad, because it’s not just about raw data — it’s also about the right data, and about processing and interpreting the data. So you could actually have a smaller model that’s better than GPT-4; that is definitely possible, and I think they’ll get there. So what does he say? He says they will make the models better in other ways. And this shouldn’t come as a surprise if you’ve been listening to him — I do recommend the Lex Fridman interview, two and a half hours with Sam Altman, riveting to me and hopefully to you as well — where he already alluded to this. There has also been a lot of talk about how they were going to run out of data somewhere around GPT-4. So this is not surprising. But there is a big implication here: they’ve been making the model better in ways other than adding data, but the main thrust of the improvement was coming from more data. So this might actually substantially slow down the development of these AI models, because now they have to find new ways to improve them, and it might take another five or ten years to find that new way. Or maybe GPT-4 — which is pretty excellent, by the way — only gets minor improvements for quite some time.
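The “more dots, less added clarity” metaphor can be made concrete with the power-law curves from the scaling-law literature, where loss is modeled as E + A/N^alpha + B/D^beta (N is parameter count, D is training tokens). The snippet below uses ballpark constants close to the published Chinchilla fit purely to show the shape of the curve; it is an illustration, not something from the video or from Altman.

```python
# Diminishing returns, illustrated with a Chinchilla-style scaling law:
#   loss = E + A / N**alpha + B / D**beta
# N = parameter count, D = training tokens. Constants are ballpark values
# from the published fit, used here only to show the shape of the curve.
E, A, ALPHA = 1.69, 406.4, 0.34
B, BETA = 410.7, 0.28


def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA


TOKENS = 1e12  # hold the data budget fixed at ~1 trillion tokens
for n in (1.5e9, 175e9, 1e12, 1e13):
    print(f"{n:.1e} params -> predicted loss {predicted_loss(n, TOKENS):.3f}")

# Each successive jump in parameters shaves off less loss than the one
# before -- the curve flattens, which is the "diminishing returns" being
# described above.
```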
It is a little bit disconcerting to think that maybe they’re actually at a wall right now — that it’s already here — and that this is possibly why he’s saying this: he’s alluding to what’s happening inside the company, that they’re realizing, holy shit, this thing isn’t improving anymore, or it’s improving very marginally for a huge cost. He has said that building GPT-4 cost over $100 million, and if you listen to how he says it, it sounds like well over $100 million. So the idea of building a GPT-5 that’s that much bigger might cost over a billion dollars, maybe more — if they could even do it, and they might not even be able to do it right now. This also implies, to some degree, that when people talk about AGI and the recent speed of change, things might actually slow down quite a bit, that we might still be quite far out from a superintelligent AI, and in general that certain broad, do-everything types of AI models may be out of reach for quite some time. The good news is that if you’re looking to be in the AI space, there’s still a lot of opportunity even without further scaling, by building what people are calling narrow models — models whose use case is narrowed down, which means you need way less data to get a good result, because the situations are much smaller and much more controllable. There’s still a lot of room to grow there. If you had a GPT-4-sized model for one specific thing — let’s say an AI surgeon, I’m just putting that out there — then it could probably be really, really freaking amazing, way better than GPT-4 is at that one specific thing. The conclusion from all this is basically that even if OpenAI’s development stalls and we don’t see GPT-5 for, say, seven years, that doesn’t mean the AI space is stuck or that there isn’t more that can be done. It more implies that some of the big, ambitious, broad, super-AGI-type things might actually be further away, because we might need a new technological development — something that’s not a transformer, or maybe a next-level transformer, or maybe another piece of technology that connects into the transformer and supports and amplifies it. There are a lot of different possibilities, but they don’t know what it is yet. So this strategy has kind of come to an end — that’s what he’s saying with this.
And when he says it has come to an end — the headline is probably taken out of context — what it really should say is that the strategy of building bigger and bigger models is over for OpenAI. Maybe not for other companies, but for them, because that was the strategy that got them to where they are today. Anyway, thank you for watching. Let me know in the comments if you agree or disagree with me — I’ll put a link to this in the comments. Like this video and subscribe for more awesome AI videos. Thanks. Have a great day. Bye.


OpenAI Not Working on GPT-5?

Sam Altman, CEO of OpenAI, made some interesting comments recently about GPT-5.

The comments are being heavily interpreted, and it seems to me that some people are reading a bit too much into them…

…so, I decided to do my own reading into the comments lol.

Approximate Transcript:

Hi, this video is about what Sam Altman said about GPT-5, some of the reactions to it, and some interpretations of what it really means for GPT-5 going forward. There’s a video — I’ll put a link to the video in this tweet — and an article, so you can read it all if you want. Watch this quick clip where Sam Altman calls into Lex Fridman’s event and says: we are not currently training GPT-5; we’re working on doing more things with GPT-4. I watched another video where someone said this means they’re not working on GPT-5. That’s not the same thing to me, because you can work on the algorithm or the model without training it. Although, the code for it is supposed to be pretty simple, so maybe there’s not a lot of work to be done there. Or maybe they are, in a sense, working on it by working on GPT-4 — the things they solve with GPT-4 are solutions they can take and apply to GPT-5. I don’t think this pushes the timeline out, and I don’t think it should be interpreted as them trying to put a pause on things to heed the call of those other people from about a month ago. Sam Altman actually comments on that in this video; he says something to the effect of, hey, they have some valid points, but there are other things he thinks are technically not very accurate. I still think we’re on track for something like a GPT-4.5 — excuse me, still dealing with this cough — maybe late this year or early next year, and then GPT-5 maybe about two years out. That’s based on the history so far, so obviously it’s a wild guess. Some people also seem to be interpreting this as a lie. I don’t think it’s a lie. I have found Sam Altman to be extremely straightforward in every single thing of his that I’ve watched — telling it like he sees it and calling it like he sees it. He’s not really political about it, and he’s not afraid to say “I disagree with someone” or “this is what we want to do.” So I don’t think this is him claiming they’re not really working on it when they are, and the fact that they’re not training it isn’t too surprising — it isn’t big news — and I don’t really doubt him. I’m genuinely curious what you think, though. Look at this poll right here on Twitter — a lot of people doubt him. Do you doubt him? Do you think it’s true? What do you think this means? I’m very curious what you have to say. Anyway, this is a quick update. If you liked this video, like and subscribe for more awesome AI videos. Thanks. Have a great day. Bye.