Unveiling Google PaLM: Revolutionizing AI with Multilingualism

What is the use of Google PaLM?

Google's PaLM 2 is a large language model poised to power next-generation capabilities across Google products, including Gmail, Google Docs, and Bard. Built along the same lines as GPT-4, it offers strong chatbot abilities alongside image analysis, coding, and translation. PaLM 2 stands out for its multilingualism, which is what brings Bard to more than 40 languages that were previously unavailable. Trained on text from over 100 languages, PaLM 2 has also passed advanced language-proficiency exams.

Moreover, PaLM 2 was trained on publicly available source-code datasets, giving it working proficiency in some twenty programming languages, including Java, Python, Ruby, and C. Together, these capabilities push out the boundaries of what language models can be expected to achieve.

Is PaLM available to the public?

Yes. On May 10, 2023, at the I/O developer conference, Google unveiled its newest large language model (LLM), PaLM 2. The company said the model would power its updated Bard chat tool, positioned as a competitor to OpenAI's ChatGPT, and would serve as a foundation model for most of the new artificial intelligence features Google announced at the event.

The good news is that PaLM 2 is now available to developers via Google's PaLM API, Firebase, and Colab. However, as with OpenAI, Google has disclosed few technical details about the model itself.

It's worth noting that PaLM 2's predecessor, the original PaLM, had roughly 540 billion parameters, over half a trillion. Neither Google nor OpenAI has disclosed parameter counts for its newest model, so rumors and speculation about how far each has advanced have been rife in recent months.

Nevertheless, what is clear from the PaLM 2 announcement is how seriously the major tech companies are investing in deep learning models. With infrastructure such as Google's TPU v4 pods cited as powering the training effort, further breakthroughs seem likely to follow.

How do I access PaLM 2?

To access PaLM 2, all you need is:

  • A personal Google account (not a business account)
  • Access to Google Bard

Once you have access through Google Bard, the possibilities open up considerably. Bard uses PaLM 2 to generate natural-language responses to the questions or instructions you submit, typically referred to as prompts. Enter a prompt and the model returns an AI-generated response.

Is Bard using PaLM 2?

Google has confirmed that Bard is indeed running on PaLM 2. The model has the potential to change the way we interact with computers, and even while still under active development it has already demonstrated a range of capabilities that were previously thought out of reach for machines.

As Bard continues to build on this technology, we can expect more remarkable results. PaLM 2 represents a major step forward in computing capabilities, and its impact on the industry is likely to be profound.

Is PaLM better than GPT-4?

GPT-4 and PaLM 2 are distinct in their functionality, but both have become significant contributors to the rapid evolution of language processing and information management.

GPT-4, touted as a backbone for information, data, and language in large- and mid-scale applications, has emerged as a powerful tool for handling complex data. Its analytical capabilities stand out when processing large amounts of text with accuracy and speed, and it slots naturally into pipelines where natural-language understanding feeds downstream decision-making.

PaLM 2, on the other hand, emphasizes portability, flexibility, and smaller-scale application development. It performs well even with limited data, delivering accurate text analysis and semantic understanding, and it comes in a range of model sizes, including versions light enough to run on-device.

In conclusion, GPT-4 suits large applications that require extensive data handling with speed and precision, while PaLM 2 targets portability and smaller-scale projects that still demand accuracy on limited datasets. It is worth recognizing the distinct strengths each system offers, whether for heavy-duty data management or as a lightweight research aid on a personal device.

Does PaLM have an API?

Yes. Google provides an Application Programming Interface (API) for PaLM 2. The PaLM API lets developers access the model and build generative AI applications for contexts such as content generation, dialog agents, summarization, classification, and more.

Routine tasks such as text summarization or dataset classification no longer have to be handled entirely by hand. With PaLM's capabilities behind a developer-friendly API, you can build applications that produce human-like responses efficiently, all backed by the underlying large language model.
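To make this concrete, here is a minimal sketch of calling the PaLM API from Python with the google-generativeai client library. The API key is a placeholder, and the model name and parameters shown reflect the text model available around launch; they are assumptions that may differ from what your account exposes, so check the current documentation.

```python
# Minimal sketch: text generation with the PaLM API via the google-generativeai client.
# The API key below is a placeholder; obtain a real key from Google before running.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")

response = palm.generate_text(
    model="models/text-bison-001",   # PaLM 2 text model name at launch (may change over time)
    prompt="Summarize in one sentence: PaLM 2 is Google's latest large language model.",
    temperature=0.2,                 # lower temperature for more deterministic output
    max_output_tokens=128,
)

print(response.result)               # the generated text
```

The same client also exposes chat- and embedding-oriented calls, which is what the dialog-agent and classification use cases above would typically build on.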


How Long Does it Take to Install GPT4All?

Here’s the answer:

Approximate Transcript:

This video is about how long it takes to install, set up, and start using GPT4All. This is supposed to be one of the easiest, if not the easiest; it's just an installer. Basically, I haven't done any research other than, you know, I think I watched a video on it a couple of weeks ago, and I have these two links. So let's see how long this takes to just get it going. I pause the video a few times, which is why I'm running the stopwatch, so that we can keep an honest time of how long this really takes. Alright, let's go. Maybe I should read this, but I'm not going to, because it's text on a computer that somebody wants me to read, therefore I will not read it. There's just too much; I can't be bothered with five whole sentences. Alright, so it's installing on my computer. Yes, I definitely read the license
okay. Alright, so I installed it. Let’s see
okay, so hold on, I’m going to pause the video to pull up the folder. Didn’t see that it left a little shortcut on my desktop. Looks like that. So I spent 30 seconds looking for something available models. I don’t know anything about this or will download that to the groovy one. See how long that takes I’ll pause the video considering trying to click on this although it looks like it’s kind of grayed out so I was thinking maybe I could go through the options while this download here. Let’s just see what happens Nope. Okay, so that took about a minute it just got done downloading it and maybe a little bit longer. And to be clear, this is Windows 10 that um, the music on here I’ll pull up I was trying to pull up my computer settings but it didn’t work I already installed Okay, so now we click here. Let’s just see 10 And you’re done. Ninja cowboy the let’s just see how it got since he goes through it without going into any settings just at its base level. And this is a pretty nice computer but uh you know it’s not like the best computer in the world either. Okay, well that’s not very useful so let’s click on the settings here
remember how many threads My computer has my processor it’s a lot Okay, so it took us about three and a half minutes to get to here but I’m not sure how useful this is. But this right knee home like you about basketball like you are you’re just bliss
so very fast to install Okay, here we go. I’m not sure it is getting the like you are Yoda thing but it’s getting some of it was about to say I don’t know how useful it is because it didn’t this this first response is pretty bad. I’m an injury cowboy. How do I do it doesn’t really move forward. But this at least is writing a poem about basketball. Not really doing it Yoda styles but that’s that’s pretty tough one I think it’s also a little a little bit slow here on this, let’s just stop generating because I feel like it’s degenerated enough. I wonder if the any of these other models are better. If I can just click download and then let it do that in the background and then select it later. So is there a new conversation if I just want to start a fresh one? As summarized above, wonderful it’ll actually do that. I’m trying to see if it like reads up their temperature maximum length if it actually goes up so I mean, like start to finish with me spending 30 seconds looking for something that was right For my face about three and a half, four minutes to actually be up and running, which is pretty, pretty great. But trying to decide if it’s actually very useful now if it’s not useful it’s probably very close I think that these models these these models that you’re able to download and install on your computer or have gotten are getting much better very quickly and I feel like you know with the the ability to use GPT for to train this would be this can be grown very quickly to a level on par with GE GPD for it seems like technically that’s against their terms of service but if people are just doing this not to sell it just to give it away, then I don’t really see what they what they can do about it or how they can even prove it. I guess the way in which they train it it might there might be like oh clearly this person is doing a lot of training was shut them down. So it did not get this one. All right, well very easy to install questionable value. They will play with it in a different in a different video and show you kind of like what its pros and cons are. And let me know if you’re interested in that. Please leave a comment below. Let me know and do I do read all those comments and try to respond to most if not all of them. Thank you very much and have a good day.
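As a side note, for anyone who prefers scripting over the desktop installer walked through above, the GPT4All project also ships Python bindings. This is a rough sketch only: the model name matches the "groovy" model downloaded in the video, and the exact class and method names vary between releases of the gpt4all package.

```python
# Minimal sketch: running a local GPT4All model from Python instead of the desktop app.
# The model file is downloaded automatically on first use if it is not already present.
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy")   # "groovy" model from the video; filename may differ by release

prompt = "Write a short poem about basketball in the style of Yoda."
print(model.generate(prompt, max_tokens=200))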



Wolfram Alpha ChatGPT Plugin Demo

Wolfram Alpha is an incredibly useful plugin for ChatGPT.

Since many don’t yet have access to the plugins, I demo’d it in this video:

Approximate Transcript:

This video is a demo of the Wolfram Alpha chat GPT plugin, I’m gonna try to go through all of the things that it says it can do. In this video and kind of show also somebody pointed out that you could click on the thing to see kind of how it’s thinking and processing, I’m going to try to do that as well. There is a little bit of a limitation here, while I have access to all the plugins, there’s two that I don’t the the actual the open AI plugins, the code interpreter plugin would dramatically enhance Wolfram Alpha as we would be able to use use it on our own data on large datasets, like an upload a spreadsheet, or a CSV or something. We can’t do that. So we’re gonna have to kind of play with that a little bit. But let’s work through this here. I’m actually going to do this one at a time. So mathematical calculations, I’m going to ask it some funky calculation. Let’s see what it does. Okay, here’s my through else’s done integrals. So let’s see. Let’s see what it does. Alright, so make sure click on this solve integrate the input response? It’s I can’t really, because it’s it’s calculations or is it steps? The creatively solving for x by finding the roots of the polynomial equation resulting from the blah, blah. I like that it’s kind of showing its work to some degree, it’s telling us what it’s doing. Really Sure. Pause for a second. These routes are expressed, oh, x equals? It’s been a long time since I’ve done integrals. I don’t recognize it’s possible. This is a limitation of kind of a chat thing here. I don’t really recognize the answer. But it seems like it’s giving me multiple answers. Maybe there are multiple x values that solve this. And I don’t really understand that. So maybe somebody who knows a little more up to date with math notation can give that to me. So, but that’s still pretty neat that it can do that. Because, you know, my understanding is that some of these integrals can be extremely complex and hard to solve. Okay, let’s try this as well, data analysis. So obviously, we can’t do the heavy data analysis, because we can’t input that much data. I could probably copy and paste from a spreadsheet, but it’s unclear how much of that would do. We could, you know, I will try that second. But let’s also do visualization. So we’re going to do is graph, please, we always want to be polite to our robot overlords, please graph this equation. Let’s see what it does. Let’s scroll down. And let’s make sure we can see what it’s doing while it’s thinking.
It created an image. And it’s going to show us that image, I think within here, the images look pretty good. Sometimes, I’ve had been able to get it to do even a 3d image, which is pretty neat, a 3d graph. Okay, did it not a bad job of that? Let’s try. Let’s let’s try doing creating a spreadsheet real quick. And then just copying and pasting it into here. Okay, so what I did was I downloaded the last five years of stock data for for Microsoft, let’s actually, let’s just do one of the the listings and let’s graph this. Let’s have a graphic and see if that works. Please graph the data. Let’s see if it was it might not be too much.
Curious if it’s going to put this in for every single input here. Oh, pause while it works through this. And I’ll show you the end result. It might be here a while because it looks like it’s inputting every line one by one. And this is five years worth of data. So you know, five times 365. You know, that’s what like 17 1800 different lines and it’s doing it very slowly. So give me a minute. We’re not sure exactly what happened here because it stopped showing its work here and it gave an error. Indeterminate string starting line comb wonder if it just didn’t like some of the data on 570. It’s just missing. Maybe it didn’t like that. It jumped dates for some reason, I guess I guess over the weekend, they did they just don’t. The stock market isn’t open. And maybe Wolfram Alpha didn’t like that. And let’s see what it said to create the group If Okay, so it only has a little bit of data, and it only created over a very short period of time. That’s interesting. It didn’t like it. I think this will be resolved quite a bit with the code interpreter plugin if I had that. So I’m not super concerned that it that it kind of barfed on the on the data. Well, it had a it had a break here. So it’s unclear why that would be an issue. Who knows why, what does this click on this cloud plugin? Okay, well, page it says page at the top page unavailable. Interesting. Okay, let’s go back up to the top and scientific computations okay, I wonder what the what we can do with this?
How about please calculate the next five dates when Mars will be closest to the earth. actually kind of curious about that
summarized it. Next five Mars opposition dates. I don’t know if that’s the correct term could not understand. Okay. Mars opposition date? So it didn’t understand that. So it’s digging a little bit deeper and trying to figure out what this is nearest opposition to Mars?
I wonder if this is correct. Let’s actually check this answer.
No, that’s it’s opposition dates. So 2020 to 2025 2047 2029 2031 2025 23 720. Okay, well, it looks like even though I don’t have the exact dates. Let’s see what seems about correct and wonder if they’ll give the exact dates? Oh, let me see if I can find the exact dates. This January 2025. Does seem to match up with January 2025. Right here? Well, there does seem to be a little bit of difference in the data. We have January 16 2025, and February 19. Slight differences. So it’s possible that there are disagreements in the CAC I don’t think so that should be a very precise calculation. So I’m kind of curious if this is correct, or that is correct, or if I adopt this, but if there’s some amount of disagreement among that. I do think it would be interesting if I especially if I were still doing like physics calculations, or chemistry or astronomy to actually compute these, I’m not really sure how to compute this exactly. So I don’t have really great ideas on what to do here. So I’m going to move on, I did think that it looks like it, maybe calculate those and didn’t actually just pull the data from somewhere else, which would be pretty useful, I think, in general, unit conversions. Okay, well, somebody told me that it was please convert it was 40 degrees Celsius, degrees Celsius into Fahrenheit. I don’t know how to spell that. I don’t remember how I’m sure that’s wrong
you know, this is something that chat GPT might be able to do. Really well. Fahrenheit. So that is correct. I wonder if I do. I just do it straight up like GPD 454 could do that. Let’s go back here. My spelling of Fahrenheit is very very wrong.
Okay, showed its work, not bad. So maybe not the best example, for conversions. I don’t really think that unit conversions is super useful, maybe in combination with other things just because there’s other things out there that do that very easily. Just go to Google and doodle can do that calculation involving dates times determined time zones and calculate durations between dates. Okay, so let’s have a calculate this. So please tell me how many years blah blah blah until the next leap day?
Oh, it’s actually doing it. Not quite exactly what I was looking for. But so it looks like there’s a Leap Day coming up in 2024. Let’s use Excel to calculate this actually. So we got this right here, we’re just going to do equals that minus that. So 311. Yep. Looks like it calculated correctly. All right, good job. Fact retrieval. This is interesting, famous people. Alright, let’s try something like, what is the population of Rome as of April 24 2023, this might actually involve some calculation. So I’m kind of curious how it’s gonna do this, instead of just pulling the data and might actually have to kind of extrapolate Rome population?
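For reference, the leap-day arithmetic above can be reproduced in a few lines of Python instead of Excel. The start date is an assumption taken from the date that appears elsewhere in this demo (April 24, 2023), which yields the same 311-day figure.

```python
# Days until the next leap day (February 29), reproducing the Excel subtraction in the video.
from datetime import date

def next_leap_day(start: date) -> date:
    """Return the first February 29 on or after `start`."""
    year = start.year
    while True:
        try:
            candidate = date(year, 2, 29)
            if candidate >= start:
                return candidate
        except ValueError:
            pass  # `year` is not a leap year
        year += 1

start = date(2023, 4, 24)                     # assumed recording date of the demo
print((next_leap_day(start) - start).days)    # 311, matching the figure in the transcript
```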
Let’s look up here. Rome is a city.
Hmm, it didn’t. I’m not sure how useful this is. Because it seems like you know, for old data, or for long standing factual data, GPT, four does just fine. And the Google search does just fine. Maybe integrating this into other things might be useful. I wouldn’t exactly say that. Either that or even the real time data, whether all right. Subject to the knowledge cut off. Okay, well, that’s not very helpful. How can you call this real time data? If it’s, you know, a year and a half old that? I don’t quite understand that I’m not gonna even test that because it’s just not accurate. Huh, I wonder if we could look at I got I got an idea. Let’s try this. Please graph the average global temperature by year from 1800 through 2021. Again, this would be this might be something that you would be able to do with the code interpreter plugin in combinations is actually two different requests in once it’s actually grabbing the historical data along with graphing it. So I’m curious to see how it does it. The code evaluation time out
all right. Let’s do this.
Oh, it’s still working? Oh, okay. I don’t need to do that
it kind of solved its own problem. That’s funny. That’s good. That’s that’s a really good time. Seems to have an issue with this. And wonder if we do let’s just do through 1950. See, if we make it a lot smaller if if it gives us a better answer.
Okay, it still seems to have trouble with that. So I’m gonna go ahead and move on. Let’s see. Language is the plug in to analyze language and text provide definition of words and performing linguistic analysis. I’m not sure how useful this is because I feel like GPT four is very strong at this. Very, very strong at this. So I don’t really see any, any like angle here that would be good for that. Unless it like combined data or math to some degree. Again, I don’t really see a way to test this. That would be separate from it’d be better than GPD forward anyway. If you have any ideas let me know and I’ll give it a try symbolic computations. Oh, this is inter First thing let me think of something. Alright, we’re gonna try this please solve for y in terms of x I altered this a little bit so that there’s a y and an X and let’s just see kind of what it does
seems to have interpreted it correctly. Okay, so there was an error and then it tried again it looked like it got it this time three possible expressions I did not do the calculation myself but hopefully those are correct if they are not please leave a comment below for what the correct one was I’m happy that it tried it. And I think that you know, the the ability to use math within the chat actually expands quite a bit on the possibilities of chap GPT because math is part of so much of our world and even just kind of simple math sometimes making sure that we get it exactly correct. Is very useful as it is actually are seemingly ironic on the surface but really not with you know how it works, but it’s a weakness of the large language models while GPT four did get that other one right and I think it is better than GPT three 3.5 at math this is bringing so hopefully this was insightful to you. Let me know if there’s anything else that I missed. I appreciate you still here if you liked this, like this and subscribe so that I can feed my kids. Have a great day. Bye


Demo of All ChatGPT Plugins Available in April 2023

ChatGPT plugins are an amazing new development from OpenAI.

They will rapidly and dramatically expand ChatGPT's capabilities.

If you’re curious what it is like to use the first batch of plugins, watch this video:

Approximate Transcript:

Hi, in this video, I want to go deeper into the plugins I shot another chat GPT plugins video, I didn’t go over all of them, I’m going to try to cover as many of them as I can. Many of my voice can hold up, I might pause to talk for a little bit because I do still have COVID. And, but also some of the plugins I noticed actually were working that weren’t working before specifically the Zapier plugin, and I’m going to show you as many of them as I can. So this one just didn’t couldn’t even figure out how that would work. Let’s go back, let’s do Zapier, we’re going to just uninstall everything so I can show you kind of from scratch. So he wants me to connect. Let me pull this window over. So it looks like I already have a bunch of actions. And that I have on here. But what we can do is let’s just add some new ones. And so right now, there’s still only six actions. I won’t show you all of them. But what I did before was these things didn’t work. I’m not gonna do the slack or the the Gmail was to do the Google Sheets one. I think that’s the more useful one. It still wasn’t working exactly correctly. And I’ll show you what I mean in a minute.
Enable action. Okay, so we got that one. Let’s also do the lookup
just have a I guess all these
Okay, so this should be working. And we’re just gonna close this now. Alright, so now we’ve got it enabled here. And I have a sheet when we pull it up. Okay, so here’s the sheet, Yoda isms. And what it what seems to do is it’ll repeat itself when it’s not supposed to. That’s when it lastly, the last time I did it, which was about a week ago, the first time I did it, it wouldn’t do anything. So what we’re going to do is we’re going to say, so we have wraps is the the I think they call us the sheet. And this is the worksheet, I don’t remember exactly how they name it. But let’s say please add, let’s see, ice cream, toast and pickles to kind of want to test see if it will do it on row six. And or it’ll just do the next empty row
to row six. Actually, here, let’s do
row six in cells in columns
A, B, and C respectively. Your pause while I finish typing this. So actually, I’m going to copy this. And I’m actually going to see if I need this last part because I believe it actually does need that last part. So let’s try this.
Let’s see what it does.
So I was not able to retrieve the information when I did it last time as well. Looks like I need to specify the worksheet. Okay, so you do need to specify that. All right, stop generating. Let’s try this again. See, even though there’s only one sheet within the Google Sheet or I’ll pause
I think it’s going to ask for confirmation. It’s going to show me a preview of the action.
Maybe a safety measure. So it’s weird Ice cream, ice cream, pickles. I could have edited it. But I just wanted to see if I just did it raw. And it didn’t do it on the correct row. It just did the next empty row which is interesting. So it’s better. But because in the past it did. I did three things as well. Excuse me, and it it it did one in all three. It got a little closer this time. But let’s see
what let’s try this. Now let’s try I
Okay, here’s the next next request let’s see if it can retrieve this information due to a variety of special rule being empty or the specific columns not containing any data
yeah that’s not what we’re looking for so it doesn’t it isn’t able to isn’t able to actually pull the information so it’s still not working but again this they haven’t released this publicly yet so I’m sure if they’re still testing and it’s already improved several times as I tried it let’s let’s go into some different plugins
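As an aside, the row-append that the Zapier plugin struggled with here can also be done directly with the gspread Python library. This is only a sketch: it assumes a Google service-account credential is already configured, and the sheet name is the one used in the video.

```python
# Minimal sketch: writing to the "Yoda isms" sheet without the Zapier plugin, using gspread.
import gspread

gc = gspread.service_account()           # assumes a service-account JSON is configured locally
worksheet = gc.open("Yoda isms").sheet1  # sheet name taken from the video

# Write explicitly to row 6, columns A-C, as the prompt in the demo asked for...
worksheet.update(range_name="A6:C6", values=[["ice cream", "toast", "pickles"]])

# ...or simply append to the next empty row, which is what the plugin actually did.
worksheet.append_row(["ice cream", "toast", "pickles"])
```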
the Wolfram Alpha one I did show that one successfully that that’s probably the best one so far that I’ve found let’s let’s just try this shot one. I can’t remember what what happened with this one. It’s too sharp turn off Zapier.
Let’s try this
I will show you how I was doing this it’s really interesting how it gives you sort of the some of the context there I don’t know how useful this is to be honest, because I wonder if it’s like if it’s actually finding like the best price is $109 million. What made them pick this this site? Over Amazon?
Okay, $540 million
All right. Maybe it’s like a classic shoe or maybe maybe they actually did find like the best one. Alright here’s, I guess this is eBay newish. Me It’s kind of cool. I like this, this image down here. It’s pretty, that’s pretty neat. I don’t see. I don’t see myself shopping this way. It feels like it still needs some more comparison aspects like maybe they pick a shoe. And they find several places where you can buy that shoe and give you several options as opposed to just one option. But it’s interesting. Oh, it’s so running in pause, not sure why this is still running like this because I did some other stuff for a few minutes. And it just seems like it’s done. So we’re just gonna try another one. I looked up some information on this right here. And it looks like they have I showed this last time they have joined the waitlist. It looks like it can’t even test it. So maybe they’re trying to put stuff in there that looks like maybe open I own some of it. Or maybe they just put that on there for fun. Or Y Combinator which I guess is a major thing in Silicon Valley. And there’s more information here. I can’t work I can’t really use it doesn’t see it doesn’t actually do anything as far as I can tell. Let’s try Expedia. I’m gonna actually tried this one yet. So
I would like to
I’m just gonna do this. Please help me plan a trip to Hawaii in July of this year. Always helpful to be polite to the to the robots. Okay, so it didn’t quite notice that. That that didn’t pull from Expedia. But did it ask some questions. So maybe it’s just waiting for the Expedia? What city let’s just do Honolulu
was actually pretty good. wasn’t quite sure. It didn’t have all the information with Expedia. Let’s see it. Let’s see what it does.
And pause. It is interesting that it assumed July 1, July 4, which is kind of weird. I didn’t give it any dates. What I might be able to do is say give it some specific dates it’s going to click through this while it’s finishing that up and see what see what this link does. So it has a location like a place to stay at Norfolk it hasn’t given flight information yet. It did automatically fill out the check in check out that’s kind of nice. That’s actually pretty useful. I think, actually, this is. This seems more useful than a shopping app just because I could see starting a conversation with chat GPT planning a trip, asking the questions about like, or giving it like, this is what I kind of wanted to trip. Where do you think I should go? Blah, blah, blah? And then midway through the conversation doing alright, that finally these things. Now, would that be more is that more efficient than then just doing the research yourself asking the questions in chat GPT or using GPS for for that, and then doing your own Google searches? Or just going straight to Expedia and doing the search? It’s unclear.
Let’s see. So you can also find flights activities, car rentals for your trip, actually.
Okay, so here’s my response. So it did change it to this. But I also wanted to test to see if it could kind of handle multiple options and do kind of comparisons, and least so far doesn’t look like it’s it’s done that when I click through, it does have the correct check in checkout times. I also wonder if there’s a way to limit it to guest rating of nine and above. I think that might be pretty useful. And I want to let it finish to see if maybe it gives additional options here. And then flight info.
Okay, so Alright, so here’s how I’m gonna do I’m gonna
add another message here. Alright. So can you give me only lodging options with a guest rating of 9.4? or greater? Okay, so it acknowledged the guests rating of 9.0 or greater, let’s see if it if it fulfills on that, that request. Okay, so it was successful at keeping all the guests ratings above nine. So that’s good. That’s for flights.
I’d prefer a direct flight. I think that there are some so I’m asking if it has some walls getting that there is a point? I think that’s important here is this seems to take longer than if I were just do a Google search and kind of poke around myself. And you know, each time ask a question, it kind of has to restart as opposed to like, if I was actually on Expedia, I do feel like this process would probably be faster. But that said, that doesn’t mean that there isn’t a way to improve it in the future. And that this is just kind of a starting place for it. Okay, it didn’t quite follow the context of the on this, because actually returned direct flights. And maybe I should ask that again. Let’s try this. Are there any direct round trip flights, and that was faster than the hotel one it generated that let’s click through here
so did look like it did it correctly? These are all one stops. So it didn’t it didn’t filter for non stop. That’s fine. That’s a flight 115. And obviously, the flight numbers on there. Okay, let’s move on to something else. This seems like okay, but not necessarily an improvement over just doing it yourself through expedia.com.
So I guess this is this, we can compare Expedia kayak. Let’s do that. And actually, we’ll do go back here and copy this.
Okay, so here’s our prompt. I added a little bit, a little bit more specifications to it. As I work through it, and ask me the questions. And yeah, this will hopefully give us all the answers we need right out the gate. Now the way that plugin interacts with chat up, some of that is up to 10, GPT. And a bit some a lot of it is up to kayak. So maybe there are options will be better. Maybe it’ll be faster. But notice how it’s just taking time and this is with GPT 3.5 GPT. Four will be even slower if they integrate that into their Okay, so I want to point out I did ask it a lot more in one question than I did with the other one but it’s already been a couple minutes and It’s still not done, it’s still kind of processing this. So I’m gonna pause it for a little bit longer. Interesting that there’s an error in here as well. Okay, it took a while and it does look like maybe this will be forever spinning just like the other one. I liked that it offered. So the the expediate plugin didn’t do as good of a job in understand that I wanted to look at two different timeframes. And also, I feel like the response here is much better, because instead of listing out specific flights, it doesn’t really make as much sense. It. It, it said, Hey, there’s there’s multiple options. And if we click book here, let’s see, let’s see what they if it’s actually filtered for. So it isn’t filtered for nonstop. But But where are the dates? Let’s see, it does have the dates correctly. So still still better, because the other one just was unnecessarily wordy, wordy. All of these do have a nine rating or higher. So that’s good. Let’s double check. The dates are correct. And it does match. Also, the link that I clicked here for the second option is accurate. So this is good. So far, I’m liking the kayak plugin more, I think it’s better. And it’s I feel like it’s more concise. Now maybe that has to do with the fact that I gave it a better question up front. Up in the air, I’m still questioning whether or not this as it stands now is more or less useful than just going to kayak.com in in filtering these things out. It could be useful in the context of a conversation. But it could also just actually add more time, because it does take quite a bit of time for this to run out. And hopefully they’ll speed that up. Alright, let’s try another one. All right, haven’t tried to open let’s let’s actually just uninstall these because I don’t really care. Straight Open Table. So open table selected. So because here’s my prompt, I would like to eat sushi for dinner tonight. And what’s planning? Can you give me some recommendations, please? Do want to unpause. So you can see how slow this is even with GPT 3.5? Because it does it is taking a long time. Right now I would say this is a far inferior experience versus just going to Google and typing in sushi restaurant. They’re not mentioning reviews, maybe I can ask them to give reviews to it, if I click through are actually used open table. So they do have reviews on if we can ask it to list the reviews for all the locations. It also appears to have gotten the location incorrect because it’s doing Singapore.
Singapore Yeah, that’s definitely not correct. Well, so there’s not really a city called West Point. Oh, it’s just playing it was really big. And over on the west side, and I was curious if it would get this right. So let’s try this again. Alright, I’m gonna try this. Yeah, this works. I kind of want the Google Maps location. I really don’t want the OpenTable. Link. I’m not trying to make reservations. Let’s see if it gets it right. Wow. Okay, so this is far superior to the initial response. Let’s see if the Google Maps link actually works.
Did a Google Maps? So didn’t actually find this is not bad. Actually. It’s interesting that it would I search this out it was still not bad. It wasn’t it was the wasn’t a direct link. Let’s see if this one worked. This one also did not work. But it’s still pretty fast. That’s actually not not that bad. It’s convenient. So it looks like there’s multiple locations.
The review count isn’t quite accurate. So 204.5. And wonder if we would need to have Wolfram Alpha integrated for that to get that exactly correct.
Overall, it’s okay. I think the biggest limiting factor here is really going to be the speed. It needs to be a lot faster for it to be useful. And it needs to give more comprehensive data right out of the gate instead of just listing this also it was weird that it did Singapore that doesn’t really make sense that it would pick Singapore I did make it a lot more clear with Plano comma, Texas, just to do a West maybe I could have said the west side of Plano, Texas, that might have gotten a better response. Okay, my voice is running out. So we’re gonna have to make to do just one more. Let’s see. This one didn’t really work.
kind of curious about that.
It’s really weird. Okay, I kind of want to do open Instacart and the shopping one.
Let’s try Instacart.
All right. Okay, here we go. I did add Wolfram Alpha, because it might want to calculate certain things about healthy stuff. So just in case, and let’s see, let’s see how it does with us. Now, it might actually be better to do the meal plan separately in GPT. Four. But we want to make this as easy as possible for the user. And I do think that they will integrate this into GPD. Four, I don’t see any reason why they wouldn’t. Okay, so it actually did a pretty good job. It did get the correct the basic meal plan. I’m not sure how easy or hard these are, I’m not much of a cook. We’ve got one vegetarian, one salmon and three chicken. It didn’t really give me instructions. But that’s fine. It didn’t really ask for instructions. And then it asked for me to add to Instacart. Now I don’t have an Instacart account. So we’ll just kind of see what happens. Maybe I’ll create a free one just to test it. Okay, so actually, I did have an old one. And I think I’m gonna be able to finish all the plugins because the cough drop is working for me. Okay. So I’ve already logged in. Let’s see this link the same looking at the bottom left. Okay, so it’s the same link was clicked. Martin logged in, was not my actual address. So add zero items to cart. Is it still loading? I can’t quite tell. No, it doesn’t appear to be still Oh, there we go. Boom, boom, boom, boom, boom. Okay.
Who used to shop for lease?
Let’s see, let’s find the car. Stop. Okay.
And then this is actually incredibly useful. I would rather it not use Instacart or revenue, something else. Not a, it’s useful, it was very useful during the pandemic. But, you know, if I find it to be pretty wasteful, I’d rather just go to like, Kroger to go, or you know, pick up, that would be less expensive, faster, easier. But that said, this is pretty cool that this actually is, you know, commented on the other ones not being very useful. This is incredibly useful to just add all of this with the right quantities to create a meal plan from scratch. And to just add, if the car does have, I could just buy it right now and have it show up at the door. And boom, that’s very, very useful in already. And I could actually see some improvements on this as well. It could, you know, there could be, you could have your preferences or something already in the app or something, to where like, what kind of stuff you like, or kind of balance with your meals that you want. And it could just be like, Alright, make me a new meal plan. And, and go from there. This is really, really useful and cool. If especially if I was, you know, if I used Instacart on a regular basis, which I it’s been a very long time since I have used it. I would love to see Kroger make an app for this, or you know, maybe all these or something. It was a quarter right next to my house. So that would be more useful. And I think it would be much less expensive, and less wasteful, to do it that way. But you know, that’s this is still really, really cool. Alright, let’s go back here. And fill out we’ll leave Wolfram Alpha in there. Because it seems like sometimes that needs to be integrated. Let’s check this one out. We already did a shopping app. It’s actually go to basketball shoe shopping. Let’s do the same comparison there. So let’s see what this gives me is the first the exact thing I did on the first one, I think it would be better to do like a prompt. And I’ll build the prompts separately. That involves like, compare it you know, automatically comparing multiple places. So don’t just give me a product in one place. Give me a product in three places where I can buy that product at a low price. For Good price or whatever. And that it is interesting that it’s giving me some of these properties. That’s, that’s pretty useful. I don’t remember if the other one did that. Oh, see, but it’s okay. 10.5 is here. I’ll pause, let it finish. Okay, let’s see what it did. And we can also compare the results to this one seems a little weaker, just to be honest. I kind of liked the detail. It does. They didn’t do the pictures, right. So that wasn’t as good. And they also just gave me a bunch of different. Let’s see, it’s, I guess it’s all what is Klarna? So it’s just all okay, so it’s an aggregator. Okay. So this actually, I think, makes more sense. Because instead of linking directly to the different pages, it gives me a comparison so I can actually go on here and find the lowest price. And it looks like it’s 131 Did it list that correctly? 131 Okay, so this is definitely superior. I like this a lot more. And I think this is actually useful again, it’s very, very slow. And yeah, I think this is pretty interesting, and has some potential. Alright, let’s move on. To uninstall this. So this one didn’t work. Also just seems really really not useful at all. Access market leading real time data sets for legal political and regulatory. Okay. I wonder if we can just do
please tell me what fiscal chat
GPT to plug in to. This should be a part of every plugin so far actually. I don’t think that they’ve done this I should go test this on the other plugins as well. Okay, so it’s giving us a good answer. Okay, this is interesting
So, this is really weird. I guess this is like SQL. I don’t know what what this what this is right here. Maybe somebody else knows if you do put it in the comments about the White House calendar this is really interesting. I have to try this on the other ones now. So let’s look at like
please give me
let’s see what this is let’s see let’s see who’s doing their job and who’s not who’s who’s at work and who’s who’s playing hooky.
May be better served with a Google search not really sure exactly what to do with this I do feel like there’s probably some usefulness to it if you’re in the government news this could give you an enable to chat with it I don’t definitely don’t see myself using it. We’re actually going to do is we’re going to go in and we’re going to look to this for everything please tell me what the Wolfram Alpha let’s just make sure that they all can do this. Because they should definitely do this there should be a way to be like I think the Meelo one is the one that did it didn’t do it last time it didn’t it didn’t understand what was going on okay, this is this is this is pretty good
by entities such as countries Oh really? I didn’t know that. Yeah, this is this is this is one of the coolest parts
so you don’t necessarily need Wolfram Alpha to do this unless maybe it gets more up to date information. And maybe, I guess you could have also done this with GPD for Alright, let’s try a different plugin. Let’s go ahead and uninstall this one. Alright, let’s let’s see if they fix this because this this was the worst performing one so far. Okay. What’s it called? Nilo? Famy from the
NEMO
okay, it didn’t do this correctly last time. Again, you know, I noticed this is the In alpha, and they haven’t released this publicly, I have developer access. I tried to get a, a plugin developed in their timeline, they had like a cutoff date of April 11. And I had just gotten access like a week before, put my developers and I have a big team but but my developers on on it as we without trying to drop the other urgent things we had, and couldn’t quite get something delivered in a week. It sounds like it was from a developer’s that it was a little more less straightforward than it seemed to actually create a plugin that worked fully. So good that good on them that they that this is actually working, because it was not working before. Let’s see what I’m going to read this. Okay, what’s magic today?
This isn’t quite what? Oh.
This isn’t quite what the website sold it as so it actually seems pretty different. The website more seemed like to help you manage your family. So shopping lists to do lists schedule, that actually sounds more useful than this, because I just go to GPT four and be like, give me a fun thing I can do with my kids today. So now maybe if this includes that in there, and it’s better than like what GPT four or Google will provide, then maybe that’s useful. Or maybe as part of a greater conversation it could actually be useful but at least it passed that that test we just talked about Alright, let’s go through this process again.
Close action settings to Zapier.
Okay and action called conversation
Okay, that’s pretty good enough. I’m actually just kind of curious if any of them are unable to give a response. Let’s just install them all actually.
Close
this is interesting. I should probably ask this first before I did it before I did anything. It is giving some pretty good detail here and I think some reasonable ideas on how to how to use these things. Expedia, if I can ask both
ah no conflicting that’s funny
it seems like a better type of response something that doesn’t seem like it’s made for coders. It’s something that’s made for actual people to to look at is a good response actually, it’s not too long but not too short. It gives an I like the number of things it makes very easy to read and see what it actually does. Let’s compare that to kayak
Okay, not quite as good as Expedia one is kind of giving it in, like coding this endpoint. That’s a that’s a coding type of reference. It’s still not terrible All right. So far they’ve all been able to do that. That’s good. That lets me do open table and I guess there’s two different ones at the same time.
Okay, this was pretty cool. Save me a couple steps. Okay, so so far they’ve all been able to enter that.
Speak in this will be this speak in Klarna. Quantum shopping. What? Speak to one.
Okay, let’s see what it did here.
Okay, so they all have it now that’s great in, it’s actually pretty useful thing and maybe I can expand it over time. Okay. Okay, that’s all the plugins right now, at least at this point, I know that they’re trying to add some more plugins. They’re still a little bit rough around the edges in some of them. And hopefully you found this helpful. I really didn’t see I haven’t seen any information out there other than just real quick, you know, that original video that opening I shot and really not much else on what what it’s really like on the back end here. And the experience overall, there’s a lot of potential, it’s very slow, and so and wonderful because it doesn’t use the legacy. Remember, if it uses the legacy, it’s possible that it’s because it uses the legacy which is much slower than the turbo. But it’s just very slow. I tried to posit for every single one because you this video would have been three times longer. Otherwise, it’s already a pretty long video. Well, if you’ve made it this far, I appreciate you. Thank you very much. And hopefully, you know if made it this far, subscribe and like and comment and send smoke signals and everything. Thank you very much. Hope you have a great day. Bye


The Implications of Large Language Models (LLMs) Hitting the Wall

Recently, Sam Altman said, "The Age of Giant AI Models is Already Over."

What he meant by that was, “our strategy to improve AI models by making them much bigger is providing diminishing returns.”

So I thought it would be interesting to explore what happens if LLMs hit the wall and improvements dramatically slow.

Approximate Transcript:
Hi, this video is about large language models (LLMs) hitting the wall and the implications of that. In case you haven't heard, I shot a separate video about this, but Sam Altman recently stated that the age of giant models is over, which I think is a bit misleading. Basically, what he was saying was that you can't improve any more by just adding more data and more parameters. And this makes sense, and it is something some people had predicted was coming, because GPT-4 just captured so much of the data. They didn't release the numbers, but if you look at it, GPT-2 had 1.5 billion parameters, which is sort of like the number of neurons, or the number of different factors it considers. GPT-3 had 175 billion. We don't know how many GPT-4 has; they didn't release that, but estimates are that it's a leap over GPT-3. And also, potentially, they're kind of out of data. Now, more data is being created every day, so it's not that they're out of data completely, but perhaps there's just not enough to get that exponential leap. But also, I think he implied, and this makes sense, that sometimes more data just isn't necessarily better; getting more data doesn't necessarily give you a better answer. I elaborate on that in my other recent video. So let's assume for the sake of argument that large language models, OpenAI's included, hit a huge wall, and they are maybe not unable to move forward, but their progress has slowed dramatically, and we don't see anything like what people think GPT-5 should be for five or ten years; maybe there's another technological development that needs to happen. So what comes about because of this? Let's look at the good. I think probably the biggest thing is for the world to catch up mentally, especially when it comes to misinformation being spread, identifying that, and helping people adjust to the new reality we find ourselves in right now, this year, 2023. That's probably the only good thing I can think of; maybe the pause that some people were in favor of just happens naturally. I personally don't think the pause is a good idea. And there are three dots here because I don't really see a whole lot of good coming from this. I'm sure plenty of people would be celebrating if this is the case; I will not be one of them. The bad: here's what I would say. Good tech is slowed down. There are a lot of really good use cases that can dramatically help people's lives coming about because of these AI models. Maybe in some cases this doesn't affect that; in some cases it likely will. Just to give an example, there's a bunch of work in health care, saving lives, curing diseases, that AI has already shown itself to be quite proficient at, and it is moving forward rapidly. So perhaps that slows down; to me, that's bad. I think there's also an argument to be made that this could actually be better for bad actors. The reason is that I think OpenAI, moving forward, would actually help tamp down the bad AI models. They have demonstrated to me pretty thoroughly that they have good intentions, and that if there were a bad model that GPT-4 or GPT-5 could help tamp down, identify, or fight back against, they would work on that and help with it.
And so I think that this actually opens the door for bad actors. And it’ll it’ll make sense when I get to this last bullet point. Let’s look at like, kind of, like how good is GPT for right now. And I would say that it’s really freakin good. Like, I was trying to test the other day like, you know, it’s supposed to be bad at math. And it actually did a pretty good job of math and showing its work. And it got it right. Not like a super complicated thing. But more complicated than what you know, other people were saying it was, it was it was wrong. And I need to add the hallucinations here. So but there are still some things that it struggles with math, as we mentioned before recent events, hallucinations, I think that there’s some more if you want put put them in the in the comments below if you have any other ideas, but it still struggles with some things, but not a whole lot. It does a whole lot really, really, really well. So you know, I think right now, it’s actually at a point that is pretty profound, just GPT four as it is now. Now. So Sam Altman did state that there are other ways in which they are looking to improve it, and I believe I believe them. And but maybe it’s just slower. Let’s assume for the case of this argument that it’s slower. It’s just kind of more minor updates that come together more further down the line in terms of years to create a more complex hints of bigger change, which is kind of what they said, they did say that a lot of their improvements, were just a bunch of little ones that kind of worked all work together, or to create where a whole is greater than the sum of the parts. How much can it really improve? Now, the plugins, the chat GPT, plugins, actually does have a lot of potential to shore up the weaknesses. Specifically, I shot a video on Wolfram Alpha and math, if you know those things work well together, and it worked pretty good, then then that’s a huge weakness, recent events, there is some way around this, to connect it to the internet, to some degree, where to pull information from the Internet, put it in your own database, that’s, that’s very recent, I do think that that will be helpful, I’m not sure about the hallucinations. Whether or not like plugins are really probably not going to really help with that. This is probably one of their biggest challenges the hallucinations. And it’s a it’s a real, it’s a real issue that needs that reduces the value of GPT. Four. So I mentioned that they’re working on it pretty hard. And I’m optimistic that they’ll be able to solve it. But you know, who knows, it might take five years before they’re like, alright, we’re, you know, it rarely, if ever gives hallucinations. Longer context windows, you know, the, so they increased from GPT, three to GPT, for technically, the context window by quite a bit, at least, like the maximum of 32,000 tokens versus I believe it was 4000 tokens. So that’s an 8x increase, which is an order of magnitude that’s pretty substantial. You know, I don’t think that this is really necessary. I know that some people were like, oh, you know, I could, if we had even more with 32,000 tokens, I think that’s like 5200 pages of content. You know, if you had even more, you could just put a whole book in there. But the problem with that is that more data is not necessarily better. And I think you get diminishing returns, and you you kind of watered down the things that you want to see if you have these huge context windows, and you dump massive amounts of data in there. 
So I don’t necessarily think this is a big improvement, I think the context window right now is quite large. And there’s always going to be a limit to it. And so, you know, this is a problem that developers and people are going to have to deal with that is being worked on. And I think there are there are solutions for it. Recent data, I think this is something that a plugin would be able to help with significantly. And I do think that there’s, well, I mean, they seem very resistant to adding a new dataset, maybe it’s because they spent so much time and money and energy training GPT four on that data set that they had, and recreating that dataset and retraining it again, might cost them, you know, hundreds of millions of dollars. So it’s possible that they’ll just kind of look to Band Aid it with plugins. And but that still is to me kind of a band aid, and maybe that there’s some way, you know, being does have sort of like use your use current search results, I think that’s helpful. And potentially, I think that they have in jeopardy for also something where I can actually go to a website and use that as a reference. So that’s really helpful. Again, I think it’s a little bit of a bandaid. So, but I don’t think this is a huge issue, because I think that, you know, there’s you just just knowing this limitation means that you can use GPT for just fine for pretty much all use cases, almost all use cases, you know, barring the one, to me the biggest issue, which is the hallucinations. Alright, so the biggest area of opportunity for AI, even with this, even if, even if GPT four is the exact same level of quality five years from now, there’s still a crapload of opportunity. For, I would say it’s, it’s it’s necessarily business opportunities, just kind of human opportunity, although I think the context of business makes a lot of sense. And it’s what are called as narrow AI models. And these are models that are made for specific situations, I see a lot of models out there that are broad general models that that are trying to, you know, be AGI they’re trying to be a generalized intelligence. And well, that’s great work. And that’s really, really helpful. I think that there’s just so much value you can get, by narrowing the focus of a model of an AI model to a specific use case or set of use cases that target a specific market. It’s way less expensive to train, you can get higher quality results for way less parameters. And so you know, I think also you can even take consider taking these narrow models and making them large language model size and the quality that you might get from that I’m not sure you really need to do that in a lot of cases. But these narrow models are going to, I think shine over the next two or three years regardless of what happens with open AI and GPT for GPT five.
I mean, there are so many use cases still to be developed. Think about how many different pieces of software are being used right now; that quantity is roughly the number of possibilities for narrow AI models, at least, and potentially more, because there are so many different use cases. And especially with the low-cost base models being published as open source right now, which cost something like $500 or $600 to build and train, maybe even less, and are still really good as general models, you can take one of those, train it a bit more for your narrow, specific case, and boom, you have a very powerful model for a very specific use case. I think this is where a lot of AI investment should go, and I think the people who do that are going to be rewarded greatly. I plan on doing that myself; more on that another time. So thank you for watching. If you liked this, please like and subscribe for more videos and have a great day. Bye


“The Age of Giant AI Models is Already Over” says Sam Altman, CEO of OpenAI

This statement by Sam Altman is provocative…

…there seems to be an implication that giant AI models are no longer useful…

…but this is not what Sam means.

Approximate Transcript:

Hi, this video is about something that sounds really profound that Sam Altman, the OpenAI CEO, said recently: that the age of giant AI models is already over. I think this statement, taken out of context, is a bit misleading. The smaller headline I clicked on made it seem even more salacious, as if he were saying that ChatGPT is done.
Like it's not good anymore? That's not what he's saying, but it was kind of my first reading of it.
It's not that they're going to stop using them; they're going to keep using large language models. What he really means is that they can't keep growing the improvement just by making the models bigger. That's the short answer, but there's a little more context I want to add, which is that making models bigger has been OpenAI's philosophy from the beginning. Andrej Karpathy, who is very famous in the AI world (I believe he was the head of AI at Tesla, and I think he's now at OpenAI), made a point in several of his videos that has stuck with me: the core code for these AI models, basically since Google released the Transformer paper in 2017, is very short and hasn't changed a whole lot. It's something like 500 lines, which for code is very, very small (I've included a toy sketch of the core attention operation after the next paragraph). And the strategy for improving it, as I recall him describing it, was essentially: just keep making it bigger, add more parameters. Parameters are sort of like neurons. For context, as the article shows, GPT-2 had 1.5 billion parameters (funny aside: the article's tagline about being generated by artificial intelligence makes me wonder if there's an AI movie or series in there). GPT-3 had 175 billion parameters, and it was way, way better; that scale-up was a large reason for the improvement. For GPT-4 they didn't announce how many parameters there are, but it's supposed to be much bigger. So what Altman is saying is that adding more parameters, more neurons, is not going to keep improving the model; there are diminishing returns, and beyond some point it doesn't give you more. Another way of looking at this is that more data doesn't necessarily improve the quality of the model either. Just from a general data-analysis standpoint, more data isn't always better and doesn't always improve things. A quick aside on why you should believe me about data: for the last 20 years I've worked with data from both a theoretical and a practical standpoint. I have a master's degree in industrial engineering, which is actually closer to data science than to traditional engineering, with lots of statistics and analysis of huge, messy datasets. I worked for about six years at a semiconductor factory, where there's a lot of complicated data, spreadsheets with tens of thousands of rows and dozens of columns. And for the last 11 years I've done SEO, which is another kind of practical data analysis, very different from semiconductors, but still data. It's been my jam for a very long time. And it makes sense: sometimes more data doesn't add a clearer picture of the situation. So this shouldn't come as a surprise, even though the headline reads like a shock, because it's been talked about for a while that, number one, they're going to run out of data to crawl. That's not entirely accurate.
Because more data is being created every day, and the rate at which new data appears keeps increasing, but it certainly hasn't been increasing at the rate at which they've been scaling their models. Additionally, more data doesn't necessarily help clarify the situation. I think I've got a reasonable analogy. Imagine you're trying to draw a 3D picture using only dots. You start with a handful of dots and you can just make out the outline of, say, a guy on a motorcycle, so you roughly know what it is. Then you put in a bunch more dots and you get a lot more clarity: you can see his facial expression, you can see he's got a bandage on his leg. Then you put in more dots and you get a very clear picture. But after that, when you add even more dots, there's no additional clarity, or the clarity that's added is very minor. I think this metaphor works for how they're dealing with the data and the parameters of GPT-4 and beyond. It does a lot of things really well right now, and adding more data doesn't necessarily improve that; it can actually set it back. Also, as you grow the dataset, each addition is a smaller percentage of the total; it becomes a drop in the ocean, and there are only so many conclusions that can be drawn from the data at some point. And there's a flip side: sometimes more data can actually be bad, because it's not just about raw data, it's also about the right data, and about processing and interpreting it. So you could potentially have a smaller model that's better than GPT-4; that is definitely possible, and I think they'll get there. So what does Altman say? He says they'll make it better in other ways. This shouldn't come as a surprise if you've been listening to him; I do recommend the Lex Fridman interview, two and a half hours with Sam Altman, riveting to me and hopefully to you as well, where he already alluded to this. There's also been a lot of talk about how they would run out of data somewhere around GPT-4. So this isn't surprising. But there is a big implication here: they've been making the model better in other ways than adding data, but the main thrust of the improvement was coming from more data and more scale. So this might actually substantially slow down the development of these AI models, because now they have to find new ways to improve them, and it might take another five or ten years to find that new way. Or maybe GPT-4, which is pretty excellent by the way, can only get minor improvements for quite some time.
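Since the video mentions that the core transformer code is only a few hundred lines, here's a toy sketch of the central operation, scaled dot-product attention, just to illustrate how compact the core idea is. This is a bare-bones NumPy illustration under my own assumptions, not OpenAI's actual code; real models add multiple heads, masking, learned projections, normalization, and MLP layers on top.

```python
# Toy sketch of scaled dot-product attention, the core transformer operation.
import numpy as np

def attention(Q, K, V):
    # Q, K, V: (sequence_length, d) matrices of queries, keys, values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V                                 # weighted mix of values

# Example: 4 tokens with 8-dimensional embeddings, attending to each other.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = attention(x, x, x)
print(out.shape)  # (4, 8)
```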
It is a little bit disconcerting to think that maybe they're actually at a wall right now; that is possibly why he's saying this. He may be alluding to what's happening inside the company: they're realizing, holy shit, this thing isn't improving much anymore, or it's improving very marginally for a huge cost. He has said that building GPT-4 cost over $100 million, and if you listen to how he says it, it sounds like well over $100 million. So the idea of building a GPT-5 that's just that much bigger might cost over a billion dollars, maybe more, if they could even do it, and they might not even be able to do it right now. This also implies, to some degree, that when people talk about AGI and the recent speed of change, things might actually slow down quite a bit, and we might still be quite far out from a superintelligent AI, and in general from the broad, can-do-all-things type of AI models; maybe those will be out of reach for quite some time. The good news is that if you're looking to be in the AI space, there's still a lot of opportunity even without that, by doing what people are calling narrow models: models where the use case is narrowed down, which means you need way less data to get a good result, because the situations are much leaner, much smaller, and much more controllable. There's still a lot of room to grow there. If you had a GPT-4-sized model for one specific thing, say an AI surgeon, just to put something out there, it could probably be really, really amazing, way better than GPT-4 is at any one specific thing. So the conclusion is basically that even if OpenAI's development stalls and we don't see GPT-5 for, say, seven years, that doesn't mean the AI space is stuck or that there's not more that can be done. It more implies that some of the big, ambitious, broad, super-AGI type goals might actually be further away, because we might need a new technological development: something that's not a transformer, or a next-level transformer, or another piece of technology that connects into the transformer and supports and amplifies it. There are a lot of possibilities. But the problem is they don't know what it is yet. So this strategy has kind of come to an end; that's what he's saying with
this. When he says it's come to an end, and this is where it's probably taken out of context, he really means that the strategy of building bigger and bigger models is over for OpenAI. Maybe not for other companies, but that was the strategy that got them to where they are today. Anyway, thank you for watching. Let me know in the comments if you agree or disagree with me; I'll put a link to the article in the comments. Like this video if you liked it and subscribe for more awesome AI videos. Thanks. Have a great day. Bye


Cool New Midjourney Feature

Midjourney is the best image AI model as of April 2023. It produces, hands down, the most photorealistic images.

The usability of Midjourney…leaves a lot of room for improvement.

“Permutations” is a big improvement that makes it much faster for the user to generate a lot of different images with one command.

Check it out:

Approximate Transcript:

Hi, this video is about a really cool new feature in Midjourney. I've started using Midjourney a lot more recently; you might notice that I almost always start my thumbnails with it, and almost all the graphics I use here come from Midjourney. Usually when I go in, I want to create different variations: I have a few different ideas for how things could work out, or different features I want, or I want to look at two or three different styles, and that creates a lot of extra work when you're trying to do a bunch. So here's the new feature: within each set of curly brackets you list different options, separated by commas, and you can have quite a few. We have four options here, four here, and four here, so this will be four cubed, 64 different combinations (see the sketch below for the math). So if I hit Enter... "too many prompts, the limit is 40." Okay, news to me. So let's copy it back in, take away the robots option, and take away one more of the nature-themed ones. That should get us under the limit. There we go, and it just starts firing them out. Now, I have the highest tier of Midjourney, so I think it runs a lot faster for me than for some people, but this is a really cool feature for getting a lot of different variety in your art very quickly and then seeing which ones work best for you. I always love when software companies invest in making things faster, more efficient, and more productive for their users, so thank you very much, Midjourney. If you're already using Midjourney, definitely start using this; I'm sure you'll find it immediately useful and very simple to use. So thank you very much, like this video if you liked it, subscribe if you want more awesome AI content. Hope you have a great day. Bye
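To see why the example above hit the 40-prompt limit, here's a minimal sketch of how curly-bracket permutations multiply. The subjects, settings, and styles below are made-up examples; only the general `{a, b, c}` expansion behavior is the actual Midjourney feature.

```python
# Sketch: expand a Midjourney-style permutation prompt locally to count
# how many prompts it would generate before you submit it.
from itertools import product

subjects = ["astronaut", "samurai", "wizard", "robot"]          # hypothetical options
settings = ["forest", "desert", "city", "ocean"]
styles   = ["photorealistic", "watercolor", "pixel art", "oil painting"]

prompts = [f"{s} in a {p}, {st}" for s, p, st in product(subjects, settings, styles)]
print(len(prompts))   # 4 * 4 * 4 = 64, over the 40-prompt limit mentioned in the video
print(prompts[0])     # "astronaut in a forest, photorealistic"

# Dropping one option from two of the lists gives 4 * 3 * 3 = 36, which is under 40.
```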


Artificial Intelligence Business Opportunities

The latest developments in artificial intelligence have created countless new business opportunities.

In the video below, I explore some of the angles and vectors that I think are getting overlooked in this field.

Approximate Transcript:

This video is about AI business opportunities. It's obvious to a lot of people, maybe less obvious to some, that where we're at with AI right now, what has happened in just the last year or two and what's likely to happen in the next year or two, creates major new opportunities: to start new businesses, to add onto a current business, or, for people who work for someone else but want to continue to be valuable to their company or to other companies, to make themselves more valuable going forward. There's really a massive abundance of opportunity here. If you're setting out to create a new business or add something to your business, it's actually more about narrowing your focus than about whether there's enough opportunity out there. AI is going to change a lot, because it's going to touch every single industry very quickly, some industries sooner, faster, and harder than others, but ultimately it will touch almost everything. This is not like Bitcoin, this is not like blockchain, it's not even like the cell phone wave of opportunities; there's so much more. I think it's on par with the internet. Bill Gates said there's only one thing he's seen in his 40 years that was this transformative, and it was actually the graphical user interface; he didn't even mention the internet. To be clear, all of these things needed to happen in order for this to work, but here we are, and AI is really, really amazing. So here's one of my major recommendations: do some research, then pick a good-to-excellent path and stick to it. Stay focused; you'll hit a wall; keep going. That doesn't mean you never pivot, but it means you don't stop completely or abandon everything at the first hurdle you see. It also means not searching for the perfect opportunity, because you don't need the perfect opportunity, and searching for it puts you on an endless quest where all you do is research and you never actually do anything, and you miss the opportunity. What often happens with this kind of thing is you start one thing, get halfway into it, see what you think is a better opportunity, drop the first thing, move to the second, rinse and repeat, and after a few years you end up with multiple half-built businesses. I definitely don't recommend that. The first area I would point to is software development. There are new software capabilities that have come about just in the last couple of years by calling AI APIs. Text summarization, for example: you couldn't really do that in any efficient way before, unless maybe you had a really specific set of niche text, yet AI can now do it very inexpensively, and that's extremely valuable and genuinely useful inside software (there's a small sketch of this right below).
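As a concrete illustration of the summarization idea above, here's a minimal sketch using OpenAI's chat API in the pre-1.0 `openai` Python library style that was current when this was written. The model name, prompt wording, and function name are my own example choices, not a recommendation.

```python
# Sketch: text summarization by calling an AI API.
# Assumes `pip install openai` (pre-1.0 interface) and an API key in the
# OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize(text: str, sentences: int = 3) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You summarize documents concisely."},
            {"role": "user", "content": f"Summarize the following in {sentences} sentences:\n\n{text}"},
        ],
        temperature=0.2,  # keep the summary close to the source text
    )
    return response["choices"][0]["message"]["content"]

# Usage: summary = summarize(open("report.txt").read())
```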
And there's a host of other things like this that software can do now that it couldn't do before, in some cases even just a year ago. Or, more precisely, in some cases it could do them a year or two ago, but the solution was really bad or really expensive, and that has improved dramatically since then, so it's now practical; you effectively couldn't do it before, even if theoretically you could. That's the kind of thing I'm talking about. I also think there's a lot of room for what I would call "AI winner independent" solutions. That means that whether OpenAI wins, Google wins, Nvidia wins, or some new company we've never heard of yet wins the AI war, your solution still works. One way to go about that is not being too dependent on a single provider; that said, you could be using OpenAI's API today and switch to somebody else's API in the future if you need to, so building on one provider isn't fatal. This is more about building tools that can use a variety of different APIs, or that even let users pick between them; it'll make more sense if you go down that path (I've sketched the idea at the end of this transcript). Then there are lots of niche or use-case-specific solutions. I mentioned in another video, about AGI versus narrow AI, and in a few other videos, that just because there's a superintelligent artificial general intelligence that is a thousand times more intelligent than the smartest human being ever, that does not mean it will do everything, and it does not mean it will do everything well. There's a lot of room, especially for smaller competitors, to come in. So pick a specific lane, a specific problem set, and really focus on solving that specific problem really well and catering to it. You might think, but I want more opportunity; that's not really how it works in business. You can't be all things to all people, and trying to be is a recipe for failure. The more you can narrow this down, the more value you'll be able to build, faster, and ultimately the more value you'll be able to provide. There are tons of hundred-million-dollar-plus companies that are extremely narrowly focused, and in AI there's going to be lots of opportunity for that. I don't have a list of all the tools, there are just so many, but integrating tools into your business, or seeing how a tool makes a certain business process a hundred times more efficient or a hundred times better, can lead to whole new businesses that are like classic businesses with a fresh face: ChatGPT and its plugins, Midjourney (you could build whole art businesses on Midjourney, it's so good, and version 5 is really, really good), GitHub Copilot integrated into your software development. There are so many more, and more coming out every day. Another thing here is a little less about business opportunities, although it does open some up: it also applies
if you have a job. A commitment to skill building right now, I think, is one of the best things you can do, so that you're aware of these tools, understand them, and know what's actually possible with them. Even if you have a job right now, there could still be some really great business opportunities in following what I'm suggesting here, and it can also make you a lot more valuable to your company. Think about it this way: say AI is going to reduce jobs by 50 percent, which is the kind of rough estimate people throw around. Who goes first, and who is left? The people left will be the ones who understand their industry and how it integrates with AI. So if you start studying AI, and also study and brainstorm how it affects your industry or your specific company, that's going to make you way more valuable. And if you come up with a really good idea and the company you're working for says, no, we don't want to do that, then you can say "see you later" and go do your own thing. You get to learn, think, process, and understand on their dime, and if they don't want to reward you for something that would genuinely benefit them, go do it yourself. Even if you take AI completely out of the picture, being a top performer in your industry makes you a lot stickier, and it will make it a lot easier for you to understand how AI can integrate into your field. And there are going to be industry-specific AI solutions; keep an eye out for a big problem in your industry that you think generative AI could fix. That's going to be a good way to go. I'll quickly mention my first AI project, which is coming out very shortly; I hope to ship the first version sometime in April 2023. It's an "AI winner independent" solution. Let me read the USP, or at least my current working version: this is for frequent ChatGPT users, AI enthusiasts, and prompt engineers. The software enables an organized, scalable, searchable, and systematic approach to managing AI model inputs and outputs in text, image, and audio, all in one conversation. Amplify and streamline your work by discovering, organizing, filtering, searching, sharing, and building your prompts more effectively; invite your friends and colleagues to join in on your AI conversations and art; easily create templates for prompts to build on and quickly refer back to later. So it's a prompt engineering tool, and it should be ready soon. If you're interested, leave me a message or shoot me a message, and I'll be releasing specifics really soon. I think it'll be really helpful; the few people I've shown it to are very excited to get their hands on it, because it saves a lot of time and actually lets you do more, and there are some capabilities in it that I haven't seen in any other tools. It's a pretty unique tool. I've
been looking around and I don't see anything quite like it, so I'm very excited about that, and it's an opportunity built on the same principles I described above. Thank you very much; if you liked this video, give it a like, leave a comment below if you have any questions or thoughts, and subscribe. Have a great day. Bye
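For the "AI winner independent" idea above, here's a minimal sketch of what a provider-agnostic abstraction layer could look like. The class and function names are hypothetical; only the OpenAI call reflects a real client library (pre-1.0 style), and the second provider is a stand-in stub rather than a real vendor integration.

```python
# Sketch: a thin abstraction so the rest of your product doesn't care
# which AI provider ends up "winning".
from abc import ABC, abstractmethod

class TextModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(TextModelProvider):
    def complete(self, prompt: str) -> str:
        import openai  # assumes pre-1.0 openai library and OPENAI_API_KEY set
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

class LocalStubProvider(TextModelProvider):
    # Placeholder for any other vendor or a self-hosted model.
    def complete(self, prompt: str) -> str:
        return f"[stub response to: {prompt[:40]}...]"

def build_provider(name: str) -> TextModelProvider:
    # The app only ever talks to TextModelProvider, so switching vendors
    # becomes a config change rather than a rewrite.
    return {"openai": OpenAIProvider, "stub": LocalStubProvider}[name]()

# Usage: provider = build_provider("openai"); print(provider.complete("Hello"))
```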


AI Bullshit vs AI Reality

There’s a lot of BS & hype out there right now about AI.

In this video I attempt to cut through the BS and identify the reality.

Approximate Transcript:

Hi, this video is about AI BS versus AI reality. There's a really well done video by a comedian whose name I can't remember, but this is what he looks like, that I thought was thoughtful and well done in a lot of ways. He made a lot of points that were practical and realistic, but that also sometimes missed the point or missed the conclusion. So I thought it would be useful to shoot a separate video talking about some of the stuff that is BS, because he does bring up a lot of valid points, and I recommend you watch it, especially if you're feeling too high on AI and think it's the best thing ever that's going to do everything. You might also find yourself wondering, is this really AI? That's a valid point he brings up: AI is a real field of computer science, but a lot of what we're seeing right now is AI marketing, where people just slap "AI" on things. I remember a political cartoon where somebody basically named a book "AI" and a company paid for it just because it had AI in the name, and he makes a similar really funny joke about slapping AI on everything, AI pizza and so on, just marketing things that way because it's hot right now. He compares it to crypto and the metaverse. I'd say the metaverse is dead, and crypto made over-promises; I don't know if there was ever really the potential for it to deliver nearly as much value as some of the people who talked it up claimed. There are also a lot of jokes going around about people who were super into crypto, who were crypto experts, and are now suddenly AI experts. That's not me; I was never really that interested in crypto. I did have some Bitcoin that I sold at somewhere between $42,000 and $45,000, but I was never a big believer in it, just because the broad applications were never obvious to me. It did seem like there was some potential with the blockchain, but there are still major issues with it. So I think people being disappointed in crypto is fair. The metaverse I don't even dismiss so much as I never really understood what the objective was; I thought it was even dumber. I remember hearing that Disney had a team of 20 or 50 people just for the metaverse, which is kind of crazy to me, and they've since been laid off as the metaverse has basically died. He also talks about things like an AI DJ, and I like his term "AI tech bros," though I'm not sure if I count as one; probably not, I try to be a little more reasonable. I'm not trying to say AI does everything, though I do sometimes joke about that. The 42 Robots thing is a reference to The Hitchhiker's Guide to the Galaxy, among other things, where the answer to life, the universe, and everything is 42, and to some degree AI has the potential to be that kind of answer. But he also acknowledges that there are some genuine benefits to it.
So I'm going to go through a lot of his specific criticisms of AI and point out where I think he's wrong, where I think he's right, and, in a lot of cases, where he's half right. He's really funny, so at a minimum you should be entertained. He talks about full self-driving, and he jumps into this pretty heavily; I think he's right and wrong about it. Elon Musk has been saying full self-driving cars are coming in a year since 2014. True. But he did finally, air quotes, deliver on it toward the end of 2022. I have it in my car and I shot a separate video on it, so you can check that out on the channel. Whether it's really full self-driving, I would say no. I do know some people, I have a Model 3 and a friend has a Model S, who say it's way, way better than what I'm experiencing, so maybe the Model S is better; it's also a newer car, so maybe there's slightly better hardware in it. But I want to point out something here: just because a problem can't be solved now doesn't mean it can't be solved. This is really important to understand, because it's an assumption I hear over and over again, not just about self-driving but about all sorts of things, and it's a bit of a theme in a few other places too. It's like the Wright brothers: where we started was so far from flight, with those old-timey pictures of guys flapping strapped-on wings, and I bet people at the time thought, what idiots, of course humans can't fly, we can't fly now, therefore we'll never fly. It was one of those things that wasn't solved until it was. I do feel like Tesla is pretty close; I think they have a lot of things right, but there are a lot of edge cases and definitely a lot of things that need to be fixed. I've always thought, for 20-plus years, that this is a solvable problem, very complex but very solvable, just because of the nature of the way driving works. He says it was a lie, it was always a lie, and that robotaxis are sci-fi. I definitely disagree with that, and I don't think Elon was lying either. I don't know Elon personally, but I don't think he was saying it with the intent of, hey, let's sell a bunch of Teslas even though I don't really think it can happen. I think he believed it; I think he believed they were going to solve it and were very close, for a very long time. So I don't think it's fair to say he's been lying this whole time. It is frustrating, as a Tesla owner who bought full self-driving four years ago, to only get it now and have it have issues, and if you have motion sickness, as I do, I don't recommend it. Maybe the Model S or a newer build is better; my friend also said his wife's Model 3, which I believe is newer than mine, is still pretty good. But saying it doesn't work now, therefore it won't work in the future, is a very bad argument.
He also brings up something like, 10 people were killed in four months while using full self-driving, therefore it should all be shut down. Whoa, whoa, whoa. This is a huge mistake that politicians and plenty of other people make all the time: they give naked numbers. Okay, 10 people were killed; well, we need to know how many cars there were, how many miles were driven, and whether per mile that is less or more than human drivers. My guess is that it's actually quite a bit less, not just a little less, a lot less. So it's a disingenuous argument. Of course some people were killed, but how many people are killed by human drivers every year? We're not saying we should take all the cars off the road. That's a really, really bad argument. I still believe full self-driving will come, I don't know when, and I think Tesla is probably going to be the first to do it, though it's hard to say, especially with Elon being pulled away by Twitter. I think this is one of his worst arguments: that full self-driving doesn't work now, therefore it sucks. Spam: this is a fair criticism. I've been in the SEO world for a long time and I know spammers of varying levels, and this is definitely already happening; it's actually been happening for years. AI content in the SEO world has been around for two to four years at least, and it was actually pretty good even before ChatGPT, so I do have some experience with this. It wasn't great, and it's definitely going to get better, in the sense of better for the spammers, and this is probably one of the hardest things to solve. I don't really know if it is solvable. Some people talk about OpenAI putting a watermark on generated text; I don't think they can do that, I don't think it's really feasible, and I believe Sam Altman, or at least some computer scientist, possibly someone at OpenAI, has said as much, though don't quote me on that. Maybe with images a watermark is possible, but with text I don't think it's there. So I fully expect this to keep happening, and I don't really have a solution for it, which is unfortunate. There's a partial solution, potentially, which is AI that helps find AI-generated content and sniff it out; maybe something can do that, but I don't see a full solution, just some ideas. Hallucinations: this is a very real criticism as well. He talks about it giving bad information, and this is something OpenAI is very aware of. I think it got better, not all the way better, but improved from GPT-3 to GPT-4, and I think it should be solvable; maybe by GPT-5 it's mostly, if not completely, stamped out. But it also reflects how the software actually works and what you can actually use it for.
I think just being aware of hallucinations, and not using ChatGPT, at least not right now, to look up facts or data from its training corpus, is the way to work around it at the moment.
You just don't use it for that situation. Don't trust facts that it gives you, because right now it will give you wrong facts; it will hallucinate. There is a little bit of prompt engineering you can do: you can say something like, "if you're not very sure of your answer, say I don't know," or something to that effect (there's a small sketch of this at the end of this transcript). There's a downside, which is that sometimes it has the right answer but isn't completely sure, so it still says "I don't know." That reduces the functionality, but it can also reduce the chances of hallucinations. He talks about search engines, and how a chatbot is not a classic search engine. This is much more closely related to what I've been doing directly in my business for a very long time, SEO, so I'm very familiar with it and have thought about it for years, and obviously much more so recently. He brings up the point that with a search engine you can actually look at a result and go, oh, this is junk; he had a joke along the lines of obviously not clicking on or trusting something like turdgobbler69.com, which was really funny. A chatbot might not be able to tell you that, so that's a bit of a fair criticism. I do think there are some things chat will take over from search really quickly and some things that will take longer, specifically things like finding the best plumber or getting directions to a restaurant. I know the ChatGPT plugins are enhancing this, but I still think there's going to be a huge gap, at least initially. The most likely scenario, and it's not completely clear because all sorts of different things can happen, is that chatbots take a bite out of the search engine market: some things you used search engines for before, you'll use chat for now. But for more commercial things like buying shoes, finding a plumber, or finding a lawyer, a chat engine is not very practical, at least not right now. It's possible chat takes over a large amount of that, if not all of it, at some point, but I think it's unlikely it takes over all of it; there's just a different use case and a different experience. Search gives you a wider range and a little more control over the information you receive, versus just relying on a chat to give you the right answers. Maybe the solution is something like Bing trying to give sources, maybe you combine the two; I'm really not sure exactly what's going to happen here. But he does bring up a good point that immediately moving all of your searching to chat is pretty dumb. Here's one of the more philosophical questions, one I had a deep discussion about with somebody who has a PhD in computer science and is in this world to some degree. He says, and this is a correct point where I think he makes the wrong conclusion, that these chatbots, the large language models, are just imitative; they're basically just regurgitating back to us what we gave them.
And to some degree that's true: as far as we know, it doesn't really understand what's going on, it doesn't really process the words, it's essentially just a really fancy autocomplete. But at some point, that doesn't matter. For example, GPT-4 passes Turing-style tests. And at some point we also have to consider whether this is kind of what humans do already. Think about how a baby starts out, imitating and regurgitating back what it hears without really knowing what the words mean; are they not human, not sentient, at that stage? So to dismiss the technology completely just because it's imitative, just because at its core it's predicting which combinations of words it expects to come next, is, I think, incorrect. It is correct that it's imitating and not really thinking in the traditional sense, at least not on the surface; there are some things happening in the background that nobody fully understands. I'd also point out that he makes the claim that it can't really make unique things, and that's just not true. We already know it's not true because there's AI working on things like protein folding, to help tackle all sorts of diseases, something humans just couldn't do at that scale because of the sheer number of possibilities. So AI coming up with new ideas is already happening; saying it can't is incorrect. Here's an important point of his that I think is valid and that we're probably going to need to deal with, assuming AI is as powerful and all-encompassing as I suspect. He talks about how it's trained on public data, and brings up the artists who are suing AI companies for using their work. I have two conflicting thoughts here. On one hand, if you go to an art museum and look at a bunch of Monets, then go home and paint something in the style of Monet, but it's not a Monet, should you be sued for that? In my opinion, definitely not. On the other hand, the dataset that, say, OpenAI is using is based on the work of basically all the humans who have ever published anything; they're taking the work of all of us. And most importantly, I do think that ten years from now there will be massive job losses because of this. I don't know exactly how much, but even 10 percent would be huge. In some cases it won't be job losses, at least not for a while; in the medical field, for example, there's already a supply problem, we just don't have enough people. But I think AI companies that are making a lot of money from AI have an obligation to make good. I don't know exactly what that looks like. My first thought, especially if we look really far into the future, say 90 percent of jobs are eliminated, is that there should probably be a pretty big tax on AI companies and probably some sort of universal basic income. I think that's correct for that future; right now it doesn't make as much sense.
Right now, I think there also needs to be a lot of work, not just on AI safety, but on fighting the negative uses of AI: fighting the spam, fighting the scammers, fighting fake pictures of politicians and deepfakes. The AI companies should be investing in these things, and I plan on doing that as well; I already have some plans, I'm trying to build stuff like that, or, if somebody else builds it, to promote it, so that we can try to reduce the negatives. There are a lot of benefits to AI, but there's also going to be a lot of negative that we'll have to deal with or work through. Social media is still having negative impacts, and I don't know exactly what the answer looks like; maybe we just can't trust the big companies to do it, because Facebook doesn't seem to have taken any accountability for how Instagram is trashing the psyches of young women, which has been demonstrated pretty heavily. So hopefully we have more responsible companies. I'm not really counting on it; even OpenAI, as trustworthy as they seem right now, who knows, maybe they change ownership and something goes very, very wrong. Microsoft, with Bill Gates, does seem to have altruistic intentions, so that's good. But who knows; maybe Facebook actually ends up winning the AI war, or somebody else takes over and just says, screw everybody, we're going to make gazillions of dollars and you can all have no jobs and we don't care. So, a quick summary of everything, the AI BS versus the AI reality.
There's definitely marketing BS. I've already seen a lot of it, where companies just slap "AI" on things; a lot of the time it's "let's put ChatGPT in our software right here, therefore we're an AI company," and I don't think that's very fair. But there are also a lot of real benefits that are going to come from AI. To say it's just a mimicking thing, that self-driving cars will never come, or that artificial superintelligence is not possible, I don't think that's very rational either. There are a lot of possibilities right now, and I don't think anybody really knows exactly what's going to happen, which is kind of scary but also exciting at the same time. I want to distinguish between what AI is capable of now, what it will be capable of in the future, and what it will never be capable of, and I think it's very hard to say AI will never be capable of something. If you're feeling like it's never going to be able to do self-driving cars, I think that's not true, and I'm trying to think of anything you could confidently say it will never be capable of. The only thing is maybe sentience, and even then I think that's arguable; you're just taking a guess. There's also some unique stuff that, if you're not into software development, you might not know, which is that AI has enabled back-end capabilities we didn't have before. It might seem kind of trivial, but something like summarizing text, which we couldn't really do before and can now do with AI, is actually really impactful in a lot of ways, and there's a whole bunch of other things like that which I won't go into. So there are a lot of benefits from AI, there is some AI reality, but hopefully this video helps you cut through some of the AI marketing BS. Let me know what you think, give this video a like, and subscribe for more awesome AI videos. Thanks and have a great day. Bye
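Here's the hallucination-hedging prompt idea mentioned above as a minimal sketch. The exact wording of the system prompt is just an example, not a tested recipe, and the pre-1.0 `openai` client library is assumed.

```python
# Sketch: instructing the model to admit uncertainty instead of guessing.
# The system prompt wording is illustrative; results will vary by model.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

CAREFUL_SYSTEM_PROMPT = (
    "Answer factual questions only when you are confident. "
    "If you are not sure, reply exactly with: I don't know."
)

def careful_answer(question: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CAREFUL_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0,  # lower temperature also tends to reduce made-up details
    )
    return resp["choices"][0]["message"]["content"]

# Trade-off noted in the video: the model may say "I don't know" even when it
# would have answered correctly, so you trade coverage for reliability.
```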


AI Predictions Over the Next 12 Months — To April 2024

Artificial Intelligence development is moving FAST…

…It got me thinking about what we can expect in just 1 year from now.

The thing about technology development is that a technology advancement in one area often speeds up the advancements in other areas — which is why tech growth is exponential.

So, we could be seeing crazy stuff in just 12 months…

Watch the video for more details:

Approximate Transcript:

This video is about AI over the next 12 months: what do I expect to see, and what seems likely. I'm shooting this in April 2023, so these are the predictions I'd make for April 2024. First of all, expect to see an explosion of narrow AI models. Why? Because you get faster development time, it's less expensive to develop, and, since more data is not necessarily better, reaching the threshold of enough data is much, much easier when the model is narrow. Also, some cases have low error tolerance. The example I've given in a lot of videos is an AI surgeon: that's almost certainly going to be a specific model, maybe at first a model for a specific type of surgery, say gallbladder surgery, then stomach surgeries, and then maybe generalizing to areas like the chest, the abdomen, the ankle, whatever. These are situations where you do not want a little slip-up; you don't want to be calling the GPT-4 API and have it get a little bit creative with how it operates, at least with regard to some parts of the task. Maybe what you do is use the narrow model for the critical parts, and if GPT-4 is the best general reasoner and the system runs into something it doesn't understand, it calls out to GPT-4 for reasoning, while the actions it takes are done with the specialized models. Which points to another thing: just because it's a narrow model doesn't mean it works by itself. Sometimes I think you need multiple models; I believe Tesla does this, where they have two different models, and if they don't agree, the car doesn't take the action, so they have to agree (there's a small sketch of that idea right after this paragraph). That points to several different narrow models approaching the same problem from different angles. I've mentioned medical many times; I think it's the primary area, the first area, where we'll see the most AI development, because there's just so much potential. AI is really, really good at suggesting and testing different drugs, because there's a practically unlimited number of ways to combine molecules, form them, and shape their structure, and it's not really possible for humans to take that job on themselves. This is not an exhaustive list; if I spent another five minutes I could probably add another five areas: legal, logistics, data analysis, math and physics, software development. All of these will benefit as the price and ease of developing a model keeps dropping. Nvidia has their new AI cloud where you can basically get the same class of GPUs that OpenAI uses and start small, just like other cloud computing, and that's going to make a lot of this much easier. So this is a big prediction: there will be way, way more narrow models, and some of them will be extremely useful and actually pretty well formed, adding a ton of value to society.
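Here's a minimal sketch of the "two models must agree" idea mentioned above. The Tesla reference is from the video; this code is a generic illustration under my own assumptions, not how Tesla actually implements it, and the two model functions are stand-ins.

```python
# Sketch: require independent models to agree before acting; otherwise fall
# back to a safe default. Model functions here are placeholders.
from collections import Counter
from typing import Callable, Dict, List

def redundant_decision(models: List[Callable[[Dict], str]],
                       observation: Dict,
                       fallback: str = "do_nothing") -> str:
    votes = [model(observation) for model in models]
    decision, count = Counter(votes).most_common(1)[0]
    # Only act when every model independently reaches the same decision.
    return decision if count == len(models) else fallback

# Hypothetical usage with two stand-in models:
model_a = lambda obs: "brake" if obs["object_ahead"] else "continue"
model_b = lambda obs: "brake" if obs["distance_m"] < 20 else "continue"
print(redundant_decision([model_a, model_b], {"object_ahead": True, "distance_m": 15}))  # "brake"
print(redundant_decision([model_a, model_b], {"object_ahead": True, "distance_m": 50}))  # "do_nothing"
```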
GPT-5: I think it's pretty likely that if it's not out within a year, by April 2024, it'll be out soon after. The time from GPT-3 to GPT-4, wow, I just looked it up, I thought it was faster than that: it was actually almost three years, more like two years and nine months, which is quite a bit of time. Some people say OpenAI seems to be speeding this up, but maybe GPT-5 is more like late 2024, maybe even into 2025; it's hard to say. I do expect that with their success, especially with ChatGPT, they'll be able to put more resources into it; they have more funding and will get more customers, which should speed everything up and put things on people's radar. So maybe, if we're lucky, a year from now we'll have GPT-5. I think it's maybe a coin flip, maybe even less likely than that within a year, but it's still not out of the realm of possibility. And if it's out, I'd expect it to score better than 99-plus percent of people on basically all testing, with a large context window. I'm calling it a mega-modal model rather than just a multimodal model, because maybe it takes on everything. It's hard to fathom how good its logic and reasoning would be, just because it's really, really good right now. One consequence, and maybe this should have been a separate bullet point, is that it's going to be really hard, almost impossible, to tell online whether somebody is a bot or not, unless you've already met them in person, and even then they could still be using AI, or it could not actually be that person. Something to think about. Will it be AGI? I would say it's very unlikely to be AGI, and the reason is that I think something else is needed beyond a large language model. For example, a large language model doesn't really have a memory, so to speak, and there are pieces like that which I think will be needed for something to be considered AGI. But it will certainly pass a crapload of tests. And maybe memory is something they add in; there's no reason they can't, or maybe a certain plugin, or connecting GPT-5 to another piece of software that gives it memory, could get there, and maybe that even gives it something like autonomy, I don't know. I'd say it's unlikely, but it's going to feel a lot like it, or at least like we're really close. Midjourney version 6: they seem to be moving really fast here. Even a year ago they were at version 2, and today they're at version 5, so maybe within a year we're actually looking at version 6 to 8, maybe even 6 to 10. Right now this is probably the best general AI art tool, though there may be specific models that come up for very narrow niches. This image is one of Midjourney's outputs when I put in just the phrase "Midjourney," and it's already amazingly photorealistic. I think this goes back to the bottom point, which is that there's going to be a reality slippage. Recently the fake photo of the Pope in a puffy coat went viral; expect a whole lot more of that.
I mean, it's kind of crazy, because it's super good right now, and it's hard to fathom how good it will be. I also wonder about companies and people whose income depends on stock photos; I feel bad for them, because I don't know why I would buy stock photos if I can make as many as I want very easily. Not that I was buying stock photos in the first place, but some people certainly do. And really, there are just going to be, like, a billion new AI tools. When ChatGPT came out it put AI on a lot of people's radars; it really said, oh my god, we're here. And while ChatGPT wasn't perfect, it was still amazing, and GPT-4 is kind of nuts. And OpenAI isn't just releasing GPT-4: there's Whisper, which is going to get better, and DALL-E, which is going to get better. Whisper, I believe, currently only goes one way, audio to text rather than text to audio, and I think they'll eventually cover both directions, but don't quote me on that. There are also all the people using OpenAI's API, connecting it into their software to create all sorts of different awesome tools that are GPT-4 powered. That
is really awesome. A lot of what's going on right now is people just doing the same thing in different places, and while sometimes that's useful, in general it's not: "ChatGPT in the web browser, ooh, now I don't have to go over here, I just go over here." It's not really adding a lot of value. But there will also be people who do more creative things, hopefully including me, that create real value and actual uniqueness. There will be new large language models, new companies trying to compete with GPT-4, and also, as I pointed out before, specialized large language models, maybe one for doctors, all sorts of domain-specific ones. I expect a lot of new text-to-image, and let's also say text-to-video: right now there is a little bit of text-to-video, but it's pretty bad, and I expect that will be very different and much better a year from now. Audio we already do pretty well, but voice cloning isn't quite perfect yet; it's really, really close, and maybe within a year voice cloning will be essentially perfect. That's kind of crazy. Again, this is part of the reality slippage, which is that it's going to be really hard to tell what's real. You could have a deepfake of a politician saying something that looks and sounds exactly like that person, but it's not real. This is going to happen, and it's probably one of the biggest downsides of AI in general. If you look at the effects of social media, it did some really nice things for society, but in general there's a really dark side, which is that it allows misinformation to spread much more easily than before. When the internet first came out, the promise was that everybody gets information, that it democratizes information, and to some degree that happened; we had things like the Arab Spring, which was nice. But when bad actors figured out how to manipulate people, they hit that stuff hard, and there's a lot of that going on right now. Unfortunately, it's probably going to get worse before it gets better. Hopefully somebody can come up with tools to fight this reality slippage, to identify voice cloning or fake, AI-generated photos via AI, so it's going to be AI protecting us against AI. That's definitely going to happen, or at least people will try it, and I expect some amount of success for some things, and other things where it's not so easy. You'll also see integrated versus patched-on tools: a lot of old tools you're familiar with will try to integrate AI. By integrate, I mean actually mixing it into the whole feature set, deep inside the software or product, versus just patching it on. I see a lot of patching on: "go get ChatGPT in our thing," or "it'll write an email for you," which I don't find very useful. I expect a lot of that to continue; a lot of tools will just do that, leave it at that, and say they're AI-powered when really they've just bolted a patch onto something old.
What are the odds of AGI? First of all, this is a fuzzy definition; ask ten different people and you'll get ten slightly different answers. My framing would be something like what David Shapiro calls an ACE, an autonomous cognitive entity; I think that's a pretty good, somewhat clearer definition than artificial general intelligence. This is an entity that is autonomous and intelligent and can act like a sentient being, not necessarily one that actually is sentient. One of the things I mentioned before that GPT-4, and presumably GPT-5, don't have, at least as far as I'm aware, is long-term memory, and also a kind of long-term context. Maybe they have a little bit of long-term context from being trained on so much data, but they don't have a long-term memory of what they've actually experienced, which I think is pretty critical to creating an ACE, and there are several other pieces to that puzzle. So my odds would be low-ish, but not zero. Also, maybe if we get a really fast GPT-5, or a GPT-4.5, somebody else plugs it into some of the other pieces: say GPT-4.5 plus X plus Y, two more pieces, where somebody takes the GPT-4.5 API, plugs it in, and actually creates something that could be considered an ACE. That's very possible, or maybe it's OpenAI itself that does it. So let me know what you think, let me know what you think will happen over the next year, please leave a comment, like the video if you'd like to, and subscribe if you want more like this. Thank you very much and have a great day. Bye.