Blog

Unveiling Google PaLM: Revolutionizing AI with Multilingualism

What is the use of Google PaLM?

Google’s PaLM 2 is an innovative language model set to reshape the AI domain by powering next-level capabilities across Google products, including Gmail, Google Docs, and Bard. This state-of-the-art model is built along the same lines as GPT-4 and offers strong chatbot abilities along with image analysis, coding, and translation. PaLM 2 shines through its multilingualism, bringing Bard to over 40 languages that were previously unavailable. Trained on text from more than 100 languages, PaLM 2 has also passed advanced language proficiency exams, truly mastering the art of linguistics.

Moreover, this language model boasts programming expertise in up to twenty noteworthy languages such as Java, Python, Ruby, and C, acquired from publicly accessible source code datasets. These features are set to redefine what was once thought impossible and push the boundaries of what can be achieved in the future of AI technology.

Is PaLM available to the public?

Yes. On May 10, 2023, Google made a significant announcement at its I/O developer conference: the unveiling of its newest large language model (LLM), PaLM 2. The tech giant stated that this model would power Google’s updated Bard chat tool, which is positioned to compete with OpenAI’s ChatGPT. Moreover, PaLM 2 will serve as a foundational model for most of the new artificial intelligence features Google is introducing.

The good news is that the much-anticipated PaLM 2 is now readily available to developers via Google’s PaLM API, Firebase, and Colab. However, just as with OpenAI’s recent models, detailed technical specifications for the model remain undisclosed.

It’s worth noting that PaLM 2’s predecessor, the original PaLM, had 540 billion parameters. With no such disclosure from Google regarding its successor, rumors and speculation about possible advancements have been rife in recent months.

Nevertheless, what has emerged crystal clear from the PaLM 2 announcement is how seriously tech companies are investing in the development of deep learning models. With massive infrastructure upgrades like Google’s TPU v4 pods cited as powering this machine learning feat, further breakthroughs seem imminent in an ever-evolving world driven by cutting-edge technology.

How do I access PaLM 2?

To successfully access PaLM 2, all that is required of you is:

  • A personal Google account (not a business account)
  • Access to Google Bard

Once granted access through Google Bard, your possibilities with this AI platform become almost infinite. PaLM 2, via Google Bard, enables users to generate natural language responses that correspond directly to the questions or queries they submit, typically referred to as prompts. All one must do is input a set of instructions or questions, and the model will return an AI-generated response.

Is Bard using PaLM 2?

It’s been confirmed that Bard is indeed utilizing PaLM 2. This powerful tool has the potential to completely revolutionize the way we interact with computers. While it’s still in the development stages, PaLM 2 has already demonstrated an impressive range of capabilities that were previously believed to be impossible for machines.

As Bard continues to master this advanced technology, we can expect to see even more remarkable feats accomplished. The possibilities are truly boundless when it comes to PaLM 2’s future potential, and we can’t wait to see what it will be capable of in the years to come. This groundbreaking tool represents a major leap forward in computing capabilities, and its impact on the industry is sure to be nothing short of profound.

Is PaLM better than GPT-4?

The technological advancements of GPT-4 and PaLM 2 have revolutionized the world we live in. These AI tools, although distinct in their functionalities, have become significant contributors to the evolution of language processing and information management.

GPT-4, positioned as a backbone for information, data, and language in large- and mid-scale applications, has emerged as a powerful tool for handling complex data sets. Its analytical capabilities shine when processing vast amounts of data with accuracy and speed, and it integrates natural language processing with machine learning techniques in a way that makes decision-making more efficient.

On the other hand, PaLM 2 also targets portable, small-scale usage with flexibility for application development; its smallest variant, Gecko, is reportedly lightweight enough to run on mobile devices. Whether the task is text analysis or semantic understanding, PaLM 2 delivers accurate results while remaining accessible anytime, anywhere.

In conclusion, while GPT-4 is ideal for large enterprises requiring extensive data management with speed and precision, PaLM 2 targets portability and smaller-scale projects that demand accuracy on limited datasets. It’s worth recognizing the unique strengths both AI systems offer, whether for big data management or as an aid for students conducting academic research on their own devices.

Does PaLM have an API?

Yes. The PaLM 2 model developed by Google has an Application Programming Interface (API) for those venturing into the exciting field of artificial intelligence. The PaLM API gives users access to the power of this cutting-edge model through various generative AI applications, in contexts such as content generation, dialog agents, summarization, classification, and more.

No longer do you have to rely on human effort alone for mundane tasks such as text summarization or classifying data sets. With PaLM’s robust abilities and developer-friendly API, you can build ground-breaking applications that produce human-like responses with impressive efficiency, all powered by this large language model.
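For illustration, here is a rough sketch in Python of how a text-generation request to the PaLM API might be assembled. The endpoint URL, model name, and field names below are assumptions drawn from publicly described conventions, not a definitive reference; verify them against Google’s current API documentation before use.

```python
# Hypothetical sketch of a PaLM API text-generation request.
# The URL, model name, and field names are assumptions; verify
# against Google's current documentation.

PALM_ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta2/"
    "models/text-bison-001:generateText"
)

def build_palm_request(prompt, temperature=0.7, max_output_tokens=256):
    """Assemble the JSON body for a text-generation call."""
    return {
        "prompt": {"text": prompt},
        "temperature": temperature,
        "maxOutputTokens": max_output_tokens,
    }

payload = build_palm_request("Summarize the plot of Hamlet in two sentences.")
# The payload would then be POSTed to PALM_ENDPOINT with an API key.
```

The same body shape would apply to summarization, classification, or dialog prompts; only the prompt text changes.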


BERT: Google’s Breakthrough in Language Understanding

What is Google BERT?

Google BERT is a cutting-edge AI language model that is now applied to search results to provide more contextually accurate answers. The model gives Google the power to analyze every word in a search query in relation to all the other words, helping it better understand the context of the search. Previously, Google processed words one by one in order, but BERT takes the whole sentence into account, including prepositions and context clues that can change the query’s meaning. This natural language processing (NLP) and natural language understanding (NLU) allows search results to be tailored to the context of the search phrase. Google estimates that BERT affects about 10% of American searches, and because the model can also understand languages other than English, its effect will only widen. As many people now search for information using natural language, BERT provides a more complete understanding to produce the desired results.

Is BERT created by Google?

In 2018, Google made a breakthrough in natural language understanding with the introduction of BERT. Its ability to tackle challenges such as ambiguity, semantic role labeling, and context analysis has enabled it to rival human “common sense” understanding. In October 2019, Google announced that it would deploy BERT in its production search algorithms for the United States. BERT distinguished itself from word2vec and GloVe by understanding context and polysemous words, enabling the algorithm to correctly interpret words with multiple meanings. It even handles ambiguity, which natural language researchers describe as the foremost challenge in the field. Given BERT’s tremendous successes, it was no surprise that Google applied the framework to its search algorithms, where it adds invaluable insight into what users are after when making queries. It remains to be seen whether the BERT-powered algorithms will fulfill Google’s high expectations for the technology.

What is BERT used for?

BERT is a powerful language processing tool that provides wide capabilities to solve many language tasks:

Sentiment Analysis

  • Sentiment Analysis can be used to assess how positive or negative reviews are.

Question Answering

  • Question Answering aids chatbots in responding to user queries.

Text Prediction

  • Gmail has used the Text Prediction feature to accurately suggest the next text.

Text Generation

  • Text Generation can craft articles from just a few sentences.

Summarization

  • Summarization condenses long documents so they take up less space.

Polysemy resolution

  • Polysemy resolution allows for differentiating words with multiple definitions based on context.

What data is BERT trained on?

The BERT framework underwent extensive training on vast textual sources such as English Wikipedia, which contained approximately 2.5 billion words, and the BooksCorpus, comprising around 800 million words. These expansive and informative datasets equipped BERT with profound knowledge of not only the English language but also our world’s intricacies. The training process for BERT was a time-consuming endeavor that became feasible due to the revolutionary Transformer architecture and the utilization of Tensor Processing Units (TPUs). These TPUs are custom circuits devised by Google explicitly for handling large-scale ML models. Employing approximately 64 TPUs, BERT’s training duration amounted to roughly four days, a remarkable achievement given the immense scale of the task. Initially, Google introduced two versions of BERT: BERT-Large and the comparatively smaller BERT-Base. While BERT-Base exhibited slightly lower accuracy, it remained on par with other cutting-edge models in terms of performance. This ensured that BERT, in its various iterations, represented a significant advancement in language understanding and processing capabilities.
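BERT’s pre-training objective on that corpus is masked language modeling: roughly 15% of input tokens are hidden and the model learns to predict them from the surrounding context. A minimal sketch of the masking step (a simplification: real BERT also sometimes substitutes a random token or leaves the chosen token unchanged instead of always masking):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=1):
    """Hide a random ~15% of tokens, BERT-style.

    Returns the masked sequence and the positions that were hidden;
    during pre-training the model is asked to recover those tokens.
    """
    rng = random.Random(seed)
    masked, positions = list(tokens), []
    for i in range(len(tokens)):
        if rng.random() < mask_rate:
            masked[i] = mask_token
            positions.append(i)
    return masked, positions

tokens = "the model learns to predict the hidden words from context".split()
masked, positions = mask_tokens(tokens)
```

Because prediction depends on words both before and after each mask, this objective is what forces the bidirectional context described below.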

Why do we need BERT?

Effective language representation plays a pivotal role in enabling machines to comprehend language comprehensively. Traditional models like word2vec or GloVe produce a solitary word embedding for each word in their vocabulary, disregarding contextual nuances. This means that a word like “Rock” would have the same representation in both “Rock Music” and “River Rock.” Conversely, contextual models generate word representations based on the surrounding words in a sentence. BERT, as a contextual model, adeptly captures these intricate relationships in a bidirectional manner. BERT’s development draws inspiration from a range of pre-training techniques, such as Semi-supervised Sequence Learning, Generative Pre-Training, ELMo, the OpenAI Transformer, ULMFit, and the Transformer. While these models are primarily unidirectional or shallowly bidirectional, BERT stands out as a fully bidirectional framework. By employing diverse methodologies and innovative ideas, BERT emerges as a cutting-edge approach to achieving comprehensive language understanding. Its contextual nature allows for a more nuanced and accurate representation of words, enhancing the overall performance and versatility of language processing models.
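The static-versus-contextual distinction can be sketched with a toy self-attention pass in NumPy. The vectors here are random stand-ins, not real embeddings, and the attention has no learned weights; the point is only that the output for “rock” changes with its neighbors, while a static lookup would not.

```python
import numpy as np

def self_attention(X):
    """One self-attention pass with no learned weights, for illustration:
    each output row is a context-dependent mixture of all input rows,
    so a word's representation depends on its neighbors."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

rng = np.random.default_rng(0)
# Random stand-ins for static word vectors (word2vec/GloVe style):
vocab = {w: rng.normal(size=4) for w in ["rock", "music", "river"]}

sent1 = np.stack([vocab["rock"], vocab["music"]])  # "rock music"
sent2 = np.stack([vocab["river"], vocab["rock"]])  # "river rock"

ctx_rock_1 = self_attention(sent1)[0]  # "rock" mixed with "music"
ctx_rock_2 = self_attention(sent2)[1]  # "rock" mixed with "river"
# The static vector for "rock" is identical in both sentences,
# but the contextual outputs differ.
```

BERT stacks many such attention layers, with learned projections, in both directions at once.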

Disadvantages of BERT

The size of BERT, while advantageous in enhancing prediction and learning capabilities through extensive training on large datasets, also brings forth certain limitations. These drawbacks are inherently tied to its scale and encompass the following:

Large Size

  • Primarily, the model’s large size can be attributed to its training structure and the corpus used for training.

Slow Training Process

  • Due to its magnitude and numerous weights that require updating, BERT’s training process is notably slow.

Implementation Cost

  • Moreover, the sheer size of BERT necessitates substantial computational resources, resulting in higher costs for implementation.

It is essential to note that BERT is specifically designed as an input for other systems rather than a standalone program. Consequently, fine-tuning becomes imperative for downstream tasks, which can be a meticulous and intricate process.

In summary, the drawbacks associated with BERT primarily stem from its considerable size. Despite the manifold benefits it offers, such as improved prediction and learning capabilities, the trade-offs include slower training speed, higher computational requirements, increased costs, and the need for meticulous fine-tuning when integrated into other systems. Awareness of these limitations is crucial in determining the optimal implementation of BERT for specific use cases.


How to Use ChatGPT to Be More Productive

How does ChatGPT improve productivity?

ChatGPT improves your productivity by increasing your capabilities at certain stages of the work process.

Specifically, the brainstorming and planning parts of your work can be significantly enhanced by ChatGPT (GPT-4).

How Do You Leverage ChatGPT for Productivity?

One of the great things about using ChatGPT to be more productive is that you can start small and simple.

We recommend starting by asking ChatGPT about a single goal or objective you have. Explain the situation as if you were briefing a consultant, giving as much detail as you think is relevant. Even five paragraphs of detail about your objective will help the AI models produce better and more relevant outputs.

To put things more broadly, ChatGPT (using GPT-4) is excellent at brainstorming and planning.

What Can You Use ChatGPT to Do?

ChatGPT can do an incredibly wide variety of things for you.

It tends to be most relevant for knowledge work and white-collar jobs; it can even be amazing at highly skilled work like software development, data analysis, and healthcare.

Here are some more examples of what you can do with it:

  1. Answer Questions: ChatGPT is able to answer a wide range of questions on various topics based on its pre-existing knowledge base.
  2. Compose and Edit Text: You can use ChatGPT to draft emails, write essays, create stories, generate ideas for content, and even check grammar.
  3. Learning and Education: It can assist with explaining complex concepts, providing examples, and answering questions in various fields, thereby supporting your learning process.
  4. Brainstorming: ChatGPT can help generate ideas for projects, solve problems creatively, or provide alternative perspectives.
  5. Language Learning: It can help you practice foreign languages by providing translations and facilitating conversation practice.
  6. Programming Help: ChatGPT can assist in generating code snippets, explaining coding concepts, and even troubleshooting simple errors.
  7. Advice and Recommendations: It can provide advice on a variety of topics, such as productivity tips, travel recommendations, book suggestions, and more.
  8. Role-playing Scenarios: You can use ChatGPT for role-playing exercises, such as practicing a speech, a job interview, or a sales pitch.
  9. Wellness and Mindfulness: It can provide guided meditation scripts, relaxation techniques, and general wellness advice.

Remember, while ChatGPT can handle a wide range of topics, it has limitations. It doesn’t know personal data unless shared in the course of the conversation (and it’s advised to not share sensitive personal information), it can’t access or retrieve personal data from databases or the internet (actually, it can now do so via some of the new ChatGPT plugins), and its responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. As of its last update in September 2021, it doesn’t have the ability to learn or remember new information from interactions.

https://www.youtube.com/watch?v=w-3CHq28gHs

What are the Best Tips to Increase Productivity?

Our biggest tip for using ChatGPT to improve productivity is to identify tasks you have to do where you think ChatGPT could help.  We think you should identify these tasks by looking at which tasks can help with brainstorming, planning, or text generation.

Here are some more ChatGPT Productivity Tips:

  1. Set Specific Goals: Setting clear and achievable goals can provide a road map for your tasks. Use SMART goals – Specific, Measurable, Achievable, Relevant, and Time-bound.
  2. Prioritize Tasks: Not all tasks are of equal importance. Prioritize your tasks based on their relevance and importance. The Eisenhower Box is a good tool for this.
  3. Break Big Tasks into Smaller Ones: Large tasks can seem daunting and hard to start. Break them into smaller, manageable parts to reduce overwhelm.
  4. Time Management: Use time management techniques like the Pomodoro technique (25 minutes of focused work, 5-minute break) to ensure you stay focused and avoid burnout.
  5. Eliminate Distractions: Identify what commonly distracts you and find ways to minimize these distractions when you’re working.
  6. Use Tools and Apps: There are many tools and apps designed to increase productivity, like task management apps, note-taking apps, and apps that block distracting websites.
  7. Delegate: If you’re in a position to do so, delegate tasks to others. You can’t do everything yourself, and it’s important to let go of the need to control every task.
  8. Maintain a Healthy Lifestyle: Regular exercise, a healthy diet, and plenty of sleep can significantly impact your energy levels and ability to focus.
  9. Take Breaks: Working non-stop doesn’t equate to high productivity. Taking short breaks can refresh your mind and keep you energized.
  10. Practice Mindfulness and Meditation: These practices can improve focus, creativity, and stress management, leading to increased productivity.
  11. Continuous Learning: Always look for ways to improve your skills and knowledge. This can make you more efficient and open up new ways to get work done.
  12. Establish a Routine: Routines can help us build positive habits that can lead to increased productivity.
  13. Learn to Say No: Overcommitting can lead to poor performance and stress. Learn to say no to tasks and activities that are not important or do not align with your goals.
  14. Stay Organized: Keep your workspace and your tasks organized. This can prevent time wasted looking for resources or deciding what to work on.
  15. Use Positive Reinforcement: Reward yourself when you complete tasks. This can be as simple as taking a short break, having a snack, or doing something you enjoy.

Remember, different techniques work for different people. The key to increasing productivity is to try out different methods and find what works best for you.
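As a small illustration of tip 4, the Pomodoro cadence can be sketched as a simple schedule generator. The 25/5 split is the conventional default, not a rule, and this is only a sketch of the pattern:

```python
def pomodoro_schedule(cycles, work_min=25, break_min=5):
    """Build an (activity, minutes) plan following the Pomodoro pattern:
    a block of focused work followed by a short break, repeated."""
    plan = []
    for _ in range(cycles):
        plan.append(("work", work_min))
        plan.append(("break", break_min))
    return plan

plan = pomodoro_schedule(4)
total_minutes = sum(minutes for _, minutes in plan)  # 4 cycles of 25 + 5
```

Four cycles fill a two-hour block, which pairs nicely with time-blocking your calendar.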

Does ChatGPT Improve Productivity?

Yes, ChatGPT can indeed be a tool to improve productivity in various ways:

  1. Information and Research: ChatGPT can provide information on a wide range of topics quickly, helping you reduce the time spent on online searches.
  2. Task Management: It can assist you with setting reminders, creating to-do lists, or even providing strategies for task management.
  3. Drafting and Editing: It can assist in drafting emails, reports, or other types of written content. It can also help review text for grammar or style issues.
  4. Brainstorming: ChatGPT can help generate ideas or provide alternative perspectives on problems, which can be useful in brainstorming sessions.
  5. Learning and Education: It can assist with explaining complex concepts in simple language, providing examples, and answering questions in various fields, thereby supporting your learning process.
  6. Practice and Role-play: You can practice presentations, negotiations, or other scenarios with ChatGPT as your partner.
  7. Mental Well-being: ChatGPT can provide stress-relief techniques, meditation guidance, and general advice for maintaining mental wellness, which indirectly contributes to productivity.

It’s important to note, however, that while ChatGPT can assist in these areas, it is a tool, and its effectiveness largely depends on how you use it. Just like any other tool, it works best when used appropriately and in conjunction with good productivity habits and techniques.

Can ChatGPT Help with Sales?

Yes, ChatGPT can indeed be a valuable tool in the sales process.

Here are some ideas:

  1. Lead Generation: ChatGPT can help with ideas and strategies for finding and attracting potential clients.
  2. Email Drafting: It can assist in drafting and refining sales emails or messages, helping to create clear, compelling communication.
  3. Sales Scripts: It can assist in creating or improving sales scripts, incorporating best practices and persuasive techniques to help increase success rates.
  4. Handling Objections: ChatGPT can offer advice on how to respond to common objections, providing salespeople with well-crafted responses.
  5. Sales Training: It can be used for role-play scenarios to practice sales conversations and techniques, which can be particularly helpful for new salespeople.
  6. Market Research: ChatGPT can provide information on effective market research methods and can synthesize simple data to support market understanding.
  7. CRM Notes: It can help with the creation of CRM notes, summaries, and updates after client interactions.
  8. Strategy Development: You can use it to brainstorm sales strategies or discuss ways to improve sales performance.

Remember, while ChatGPT is a powerful tool, it’s not a substitute for human judgment, especially when it comes to understanding and responding to the unique needs and context of individual customers. Always use it as a tool to augment your sales process, not to replace the human touch that’s essential in sales.


Whisper — Open AI’s speech to text AI Model

What is OpenAI Whisper?

Whisper, the prodigious automatic speech recognition (ASR) wizard, is flexing its muscles in the arena of speech processing.

With the wind of a staggering 680,000 hours of global, multitask web data in its sails, it is remarkably resistant to the hurdles of accents, background hubbub, and the quirks of tech jargon.

Beyond its transcription expertise in a multilingual context, it has a knack for translating from an array of languages into English. A grand giveaway, we’re rolling out our models and inference code to spark inspiration, stimulate practical implementations, and fuel ongoing speech processing research.

How does OpenAI Whisper work?

Painting the Whisper Masterpiece

Whisper’s blueprint is uncomplicated yet effective, embracing the famed encoder-decoder Transformer design. It all starts with an audio clip, divided into 30-second fragments, which are then turned into a log-Mel spectrogram. The encoder seizes these fragments while a decoder is on a mission to divine the corresponding text caption. Our star model is guided by unique tokens, choreographing tasks such as identifying languages, establishing phrase-level timestamps, transcribing multilingual speech, and performing to-English speech translation.
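As a rough sketch of the segmentation step described above, here is how raw audio might be split into fixed 30-second fragments in Python. Whisper operates on 16 kHz audio; the log-Mel spectrogram computation itself is omitted here, and the padding behavior is a simplified assumption.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Whisper operates on 16 kHz audio
CHUNK_SECONDS = 30     # each fragment fed to the encoder

def chunk_audio(samples, sample_rate=SAMPLE_RATE, chunk_seconds=CHUNK_SECONDS):
    """Split raw audio into fixed 30-second fragments, zero-padding the
    final fragment. Each fragment would then become a log-Mel spectrogram."""
    chunk_len = sample_rate * chunk_seconds
    n_chunks = -(-len(samples) // chunk_len)   # ceiling division
    padded = np.zeros(n_chunks * chunk_len, dtype=np.float32)
    padded[: len(samples)] = samples
    return padded.reshape(n_chunks, chunk_len)

audio = np.zeros(45 * SAMPLE_RATE, dtype=np.float32)  # 45 seconds of silence
chunks = chunk_audio(audio)
```

A 45-second clip yields two fragments: one full 30-second chunk and one padded out to length.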

The Training Method Behind the Maestro

Contrary to many existing strategies, Whisper doesn’t limit itself to smaller, tightly-coupled audio-text training datasets, nor does it adhere to the trend of unsupervised audio pretraining. Having been trained on a generous and diverse dataset without any specific fine-tuning, Whisper doesn’t top the charts on LibriSpeech performance, a renowned speech recognition benchmark. Nevertheless, its adaptability shines when it’s thrown into the wild, demonstrating versatility across numerous diverse datasets and slashing errors by half compared to the conventional models.

Whisper’s Melting Pot of Languages

About one-third of Whisper’s training diet consists of non-English audio, providing it with a rich multilingual banquet. It is alternatively tasked with transcribing in the original language or translating into English. This approach has proven especially potent for learning speech-to-text translation, outstripping supervised state-of-the-art results on CoVoST2 to English translation tasks, and managing to achieve this without any prior task-specific training.

How do I Use OpenAI’s Whisper?

Currently, the easiest way to use Whisper is through OpenAI’s API, though the open-source model can also be downloaded and run locally.

Click here to view OpenAI’s Whisper documentation to learn how to use the API.

How Much is Whisper OpenAI?

Through OpenAI’s API, Whisper costs $0.006 per minute of audio transcribed, which works out to 6 cents per 10 minutes.

Is Whisper OpenAI Free?

No. Through the API, Whisper costs 6 cents per 10 minutes transcribed. However, the open-source model is free to run on your own hardware.
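A quick back-of-the-envelope helper for estimating API cost at that rate (pricing may change, so treat the figure as an assumption to verify against OpenAI’s current pricing page):

```python
PRICE_PER_MINUTE = 0.006  # USD; i.e., 6 cents per 10 minutes

def whisper_api_cost(seconds):
    """Estimated transcription cost in dollars for an audio clip."""
    return round(seconds / 60 * PRICE_PER_MINUTE, 4)

ten_minutes = whisper_api_cost(10 * 60)
one_hour = whisper_api_cost(60 * 60)
```

At this rate, an hour of audio costs well under a dollar to transcribe.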

Is OpenAI Whisper Open Source?

Yes, Whisper is open source!

Did Whisper Get Shut Down?

No, it is still available via OpenAI’s API.

Can Whisper AI do Text to Speech?

No. Whisper only converts speech to text, not the other way around.


AI Text Generators — Text to Text AI Large Language Models

How do AI Text Generators Work?

AI copywriters, also known as AI text generators, work essentially the same way as any text-in, text-out Large Language Model.

The services you can purchase to generate AI content often add layers on top of the core LLMs, so there can be differences between two services even if both use OpenAI’s API on the backend.

AI text generators work based on a machine learning architecture known as transformers, specifically the transformer decoder.

Here’s a simplified explanation of the process:

Training:

These models are trained on vast amounts of text data. They learn to predict the next word in a sequence of words. During training, they adjust millions of internal parameters to minimize the difference between their predictions and the actual words in the text.

Language Understanding:

As they train, they begin to pick up on patterns in the language, including grammar, punctuation, and even style and tone. More advanced models also begin to understand context, themes, and even some factual information.

Tokenization:

When you give the model a prompt, it first breaks down (tokenizes) the input into smaller pieces, which can be as short as one character or as long as one word (e.g., “ChatGPT” might be a single token).
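Here is a toy illustration of subword tokenization using greedy longest-match against a tiny vocabulary. Real models use learned BPE or WordPiece vocabularies with tens of thousands of entries; this sketch only shows the idea of splitting words into known pieces.

```python
def tokenize(text, vocab):
    """Greedy longest-match subword tokenization: a much-simplified
    stand-in for the learned BPE/WordPiece schemes real models use."""
    tokens = []
    for word in text.lower().split():
        start = 0
        while start < len(word):
            # Take the longest vocabulary entry matching at this position.
            for end in range(len(word), start, -1):
                if word[start:end] in vocab:
                    tokens.append(word[start:end])
                    start = end
                    break
            else:
                tokens.append("[UNK]")  # no known piece matched
                start = len(word)
    return tokens

vocab = {"chat", "gpt", "token", "izes", "ize", "s"}
pieces = tokenize("ChatGPT tokenizes", vocab)
```

Note how a rare word like “tokenizes” is broken into familiar pieces rather than treated as unknown.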

Contextual Analysis:

The model then considers each token in the context of the ones that came before it. This is where the “transformer” part of the architecture comes in, allowing the model to consider multiple tokens simultaneously and understand how they relate to each other.

Output Generation:

The model generates an output one token at a time. After generating each token, it adds that token to the sequence and uses it as part of the context for generating the next token. This process continues until a maximum length is reached or a stop condition is met.
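The one-token-at-a-time loop can be sketched with a toy bigram “model” standing in for the neural network. The mechanics here (append each generated token, use it as context for the next, stop on a condition) mirror the process described above, though real models condition on the full sequence, not just the last token.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which token follows which: a toy stand-in for the model."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(model, prompt, max_tokens=5, stop="."):
    """Greedy one-token-at-a-time generation: each new token is appended
    to the sequence and becomes context for the next step."""
    sequence = prompt.split()
    for _ in range(max_tokens):
        candidates = model.get(sequence[-1])
        if not candidates:           # no known continuation
            break
        token = candidates.most_common(1)[0][0]
        sequence.append(token)
        if token == stop:            # stop condition met
            break
    return " ".join(sequence)

model = train_bigrams("the cat sat on the mat . the cat sat on")
text = generate(model, "the")
```

Swapping the greedy `most_common` pick for a weighted random draw is the toy analogue of sampling with a temperature.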

Fine-tuning:

After initial training, these models can be fine-tuned on specific tasks or styles of text. This involves additional training on a more specific dataset, which helps the model to better generate text in a specific style or better perform a specific task.

It’s important to note that these models do not “understand” text in the same way humans do. They don’t have beliefs, desires, or consciousness. They generate text based on patterns they’ve learned during training, but they don’t have a deep understanding of the content they’re generating.

Which AI Text Generator is the Best?

For a long time, the standard answer was Jasper AI, formerly known as Jarvis.

Nowadays, we don’t think the general consensus is still the same…

…we think it is now a bit more spread out into a larger variety of camps.

In our opinion, ChatGPT with GPT-4 is probably the best, but requires some skill and practice to use well, so it isn’t necessarily the best for all people in all situations.

Here’s a good video on this:

Is there a Free AI Text Generator?

Yes, ChatGPT and others are free.

What is the Top Free AI Writing Generator?

ChatGPT, in our opinion, is the best free AI writer.

What is the AI that can Write Text for You?

Soooo many AI models can write text for you.  ChatGPT, Jasper, and many others are able to do this quite well!

 


GPT-6, by OpenAI

What is GPT-6?

GPT-6 is a hypothetical OpenAI model that would succeed GPT-5.  It may or may not come out.

Will there be a GPT-6?

We think it is likely that GPT-6 will eventually come out, but it is also possible that OpenAI will go in a different direction and there will never be a GPT-6.

When will GPT-6 Come Out?

We estimate GPT-6 to be out in 2025 or 2026.

How Powerful will GPT-6 Be?

Umm, gigapowerful seems to be an understatement for how powerful we expect GPT-6 to be.

GPT-4 is impressively powerful.  GPT-5 seems likely to be crazy powerful.  GPT-6 has incredibly high expectations.

Will GPT-6 be AGI?

At this point, it seems likely that GPT-5 will be powerful enough to be considered AGI or be a major part in an AGI.

Therefore, I think we can assume that GPT-6 is even more likely to be so.

However, there is a consideration that GPT-WhateverNumber will NOT be an AGI because an AGI probably needs more pieces to it than a Large Language Model (LLM).

What Can We Expect with GPT-6?

Hopefully, no hallucinations.

Reasoning of a genius.

New capabilities that few, if any, people expected.

The sky is the limit.


How Long Does it Take to Install GPT4ALL?

Here’s the answer:

Approximate Transcript:

In this video is about how long it takes to install and set up and use and get using GPT. For All. For all this is supposed to be one of the easiest, if not the easiest, it’s just an uninstaller. Basically, I haven’t done any research other than you know, I think I watched a video on it a couple weeks ago. And I have these two lakes. So let’s see how long this takes to just get it going. And I’m I pause the video a few times, which is why I’m doing the stopwatch just for your convenience, but so that we can keep an honest time of okay, how long does this really take? Alright, let’s go. So I’m not maybe I should read this, but I’m just I’m not going to I mean, because it’s text on a computer that somebody wants me to read therefore I will not read it. There’s just too much I can’t be bothered with five whole sentences. Alright, so it’s installing it on my computer Yes, I definitely read the license
okay. Alright, so I installed it. Let’s see
okay, so hold on, I’m going to pause the video to pull up the folder. Didn’t see that it left a little shortcut on my desktop. Looks like that. So I spent 30 seconds looking for something available models. I don’t know anything about this or will download that to the groovy one. See how long that takes I’ll pause the video considering trying to click on this although it looks like it’s kind of grayed out so I was thinking maybe I could go through the options while this download here. Let’s just see what happens Nope. Okay, so that took about a minute it just got done downloading it and maybe a little bit longer. And to be clear, this is Windows 10 that um, the music on here I’ll pull up I was trying to pull up my computer settings but it didn’t work I already installed Okay, so now we click here. Let’s just see 10 And you’re done. Ninja cowboy the let’s just see how it got since he goes through it without going into any settings just at its base level. And this is a pretty nice computer but uh you know it’s not like the best computer in the world either. Okay, well that’s not very useful so let’s click on the settings here
I don’t remember how many threads my processor has, but it’s a lot. So it took about three and a half minutes to get to this point, though I’m not sure yet how useful it is. My test prompt asked it to write a poem about basketball in the style of Yoda.

So, very fast to install. It’s not quite nailing the Yoda voice, though it’s getting some of it. This first response is honestly pretty bad, and generation is a little slow, so let’s stop generating; I feel like it’s degenerated enough. It did at least write a poem about basketball, just not in Yoda’s style, though that’s admittedly a tough one. I wonder whether any of the other models are better, whether I can click download and let that run in the background, and whether I can start a fresh conversation. It will even summarize a conversation, wonderful. I also poked at settings like temperature and maximum length. So, start to finish, including the 30 seconds I spent looking for the shortcut, it took about three and a half to four minutes to be up and running, which is pretty great. I’m still deciding whether it’s actually very useful, but these downloadable local models are getting much better very quickly. With the ability to use GPT-4 outputs for training, they could grow quickly to a level on par with GPT-4. Technically that’s against OpenAI’s terms of service, but if people are giving the results away rather than selling them, I don’t see what OpenAI can do about it or how they could even prove it, unless the training pattern makes it obvious and they shut those accounts down. It did not get this last prompt right. All right: very easy to install, questionable value.
I will play with it in a different video and show you its pros and cons. If you’re interested in that, please leave a comment below and let me know; I do read all the comments and try to respond to most if not all of them. Thank you very much and have a good day.

 


Wolfram Alpha ChatGPT Plugin Demo

Wolfram Alpha is an incredibly useful plugin for ChatGPT.

Since many don’t yet have access to the plugins, I demo’d it in this video:

Approximate Transcript:

This video is a demo of the Wolfram Alpha ChatGPT plugin. I’m going to try to go through everything it says it can do, and also, as somebody pointed out, you can click on the request to see how it’s thinking and processing, so I’ll try to show that as well. There is a limitation here: while I have access to the plugins, I don’t have the two OpenAI ones. The Code Interpreter plugin would dramatically enhance Wolfram Alpha, because we’d be able to use it on our own large datasets, like an uploaded spreadsheet or CSV. We can’t do that, so we’ll have to work around it. Let’s go through the capabilities one at a time. First, mathematical calculations: I’ll ask it a funky calculation, an integral, and see what it does. Clicking on the request shows the input and response it exchanged with the plugin. It’s creatively solving for x by finding the roots of the polynomial equation resulting from the integration, and I like that it shows its work to some degree, telling us what it’s doing. It’s been a long time since I’ve done integrals, and I don’t quite recognize the answer, but it seems to be giving multiple x values that solve this, which I don’t really understand; maybe somebody more up to date with math notation can explain it to me. Still, it’s pretty neat that it can do that, because my understanding is that some of these integrals can be extremely complex and hard to solve.
Okay, let’s also try data analysis. Obviously we can’t do heavy data analysis, because we can’t input that much data. I could probably copy and paste from a spreadsheet, though it’s unclear how much of that it would handle; I’ll try that in a second. Let’s also do visualization: “Please graph this equation.” We always want to be polite to our robot overlords. Let’s scroll down and make sure we can see what it’s doing while it’s thinking.
It created an image, and it shows that image right in the chat. The images look pretty good; I’ve even been able to get it to produce a 3D graph, which is pretty neat. Not a bad job of that. Now let’s try creating a spreadsheet and copying and pasting it in here. I downloaded the last five years of stock data for Microsoft; let’s paste in one of the listings and ask it, “Please graph the data.” It might be too much.
I’m curious whether it’s going to include every single row here. I’ll pause while it works through this and show you the end result; it might take a while, because it looks like it’s inputting every line one by one, and this is five years’ worth of data, something like 1,700 to 1,800 lines, done very slowly. I’m not sure exactly what happened, because it stopped showing its work and gave an error, something like “indeterminate string starting line…”. I wonder if it just didn’t like some of the data around line 570, or that the dates jump over weekends when the stock market isn’t open. In the end it only used a little bit of the data and graphed a very short period. That’s interesting; it didn’t like the input. I think this would be resolved with the Code Interpreter plugin if I had it, so I’m not super concerned that it barfed on the data. Clicking on the plugin result just says “page unavailable” at the top, which is also interesting. Okay, let’s go back up to the top: scientific computations. I wonder what we can do with this.
How about: “Please calculate the next five dates when Mars will be closest to the Earth”? I’m actually kind of curious about that.

It summarized the request as the next five Mars opposition dates; I don’t know if that’s the correct term. At first it could not understand “Mars opposition date,” so it dug a little deeper to work out the nearest oppositions of Mars.
I wonder if this is correct. Let’s actually check this answer.

Comparing against published opposition dates, the years look about right, even though I don’t have the exact dates in front of me. The January 2025 date does seem to match up, but there is a slight difference in the details: it gave January 16, 2025, versus a published February 19 for the following opposition. So there’s some disagreement, which surprises me, because that should be a very precise calculation, and I’m not sure which source is correct. If I were still doing physics, chemistry, or astronomy calculations, it would be interesting to actually compute these myself, but I don’t have a good way to verify this, so I’m going to move on. It does look like it may have calculated those rather than just pulling the data from somewhere else, which would be pretty useful in general. Next, unit conversions: “Please convert 40 degrees Celsius into Fahrenheit.” I don’t remember how to spell Fahrenheit; I’m sure that’s wrong.
You know, this is something ChatGPT itself might be able to do really well. It answered in Fahrenheit, and that is correct; I wonder if plain GPT-4 could do that on its own. (My spelling of Fahrenheit was very, very wrong, but it coped.)

Okay, it showed its work; not bad. Maybe not the best example for conversions, though. I don’t think unit conversions are super useful on their own, maybe only in combination with other things, because other tools do that very easily; you can just go to Google and it can do that calculation. Next: calculations involving dates and times, determining time zones, and calculating durations between dates. Okay, let’s have it calculate this: “Please tell me how many days until the next leap day.”
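As a quick sanity check of the Celsius-to-Fahrenheit conversion discussed above, the formula is just F = C × 9/5 + 32, which is easy to verify in a couple of lines of Python:

```python
def celsius_to_fahrenheit(c: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

print(celsius_to_fahrenheit(40))  # 104.0
```

So 40 °C is 104 °F, matching the plugin’s answer.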
Oh, it’s actually doing it. Not quite exactly what I was looking for, but it looks like there’s a leap day coming up in 2024. Let’s use Excel to check this: subtracting today’s date from February 29, 2024 gives 311 days. Yep, it calculated correctly. All right, good job. Fact retrieval is interesting, famous people and so on. Let’s try something like “What is the population of Rome as of April 24, 2023?” This might actually involve some calculation, so I’m kind of curious whether it extrapolates instead of just pulling a number.
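The same date check done in Excel above can be reproduced with Python’s `datetime` module. The start date below is an assumption on my part (the April 24, 2023 date mentioned later in the video), but it does reproduce the 311-day figure:

```python
from datetime import date

start = date(2023, 4, 24)            # assumed "today" at recording time
next_leap_day = date(2024, 2, 29)    # the next leap day after the start date
print((next_leap_day - start).days)  # 311
```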
Let’s see what it looked up: Rome, as a city.

Hmm, I’m not sure how useful this is, because for long-standing factual data GPT-4 does just fine, and a Google search does just fine; maybe integrating it into other things would be useful. As for real-time data: it says results are subject to the knowledge cutoff. Well, that’s not very helpful. How can you call it real-time data if it’s a year and a half old? I don’t quite understand that, and I’m not going to test it, because it’s just not current. But I’ve got an idea. Let’s try: “Please graph the average global temperature by year from 1800 through 2021.” Again, this might be something for the Code Interpreter plugin, since it’s really two requests in one: grabbing the historical data and then graphing it. I’m curious to see how it does. It timed out on code evaluation.
All right, let’s try this again.

Oh, it’s still working. Okay, I don’t need to retry after all.

It kind of solved its own problem. That’s funny, that’s good. It still seems to have an issue with the full range, though; let’s just go through 1950 and see if a much smaller request gives a better answer.
Okay, it still seems to have trouble with that, so I’m going to move on. Next is language: the plugin can analyze language and text, provide definitions of words, and perform linguistic analysis. I’m not sure how useful this is, because GPT-4 is already very, very strong at this, so I don’t really see any angle where the plugin would beat plain GPT-4, unless it combined language with data or math to some degree. I don’t see a way to test this that would be better than GPT-4 on its own anyway; if you have any ideas, let me know and I’ll give it a try. Symbolic computations, oh, this is interesting. Let me think of something. Let’s try “Please solve for y in terms of x”; I altered the equation a little so that there’s a y and an x, and let’s just see what it does.

It seems to have interpreted it correctly. Okay, so there was an error, then it tried again and got it this time: three possible expressions. I did not do the calculation myself, so hopefully those are correct; if they are not, please leave a comment below with the right answer. I’m happy that it tried. I think the ability to use real math within the chat expands the possibilities of ChatGPT quite a bit, because math is part of so much of our world, and making sure even simple math is exactly correct is very useful. Arithmetic is, seemingly ironically, a weakness of large language models, though GPT-4 did get that earlier one right, and I think it is better at math than GPT-3.5. Hopefully this was insightful; let me know if there’s anything I missed. I appreciate you being still here. If you liked this, like and subscribe so that I can feed my kids. Have a great day. Bye.


Prompt Engineering Examples

Text Summarization

Text summarization is a key aspect of natural language generation. It involves condensing larger bodies of text into shorter, more digestible summaries. For instance, if you’re curious about antibiotics, you could ask the model to explain it to you. The model might provide a detailed explanation, but if you want a more concise summary, you can instruct the model to condense the information into a single sentence.

[text to summarize copied from elsewhere]

Please summarize the above text:
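For scripted use, the summarization template above can be assembled with a small helper function (the function name here is my own illustration, not part of any library):

```python
def summarization_prompt(text: str) -> str:
    """Wrap source text in the summarization template shown above."""
    return f"{text}\n\nPlease summarize the above text:"

prompt = summarization_prompt("Antibiotics are medications that treat bacterial infections.")
```

The same pattern works for the extraction, question-answering, and classification templates below: paste the source text first, then append the instruction.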

Information Extraction

Language models are also adept at extracting specific information from a given text. For example, if you provide a paragraph and ask the model to identify a specific product mentioned, it can do so accurately.

[text to extract information from copied from elsewhere]

Please identify the key factors in the above report:

Question Answering

The model can also answer specific questions based on a given context. By providing a structured prompt that includes instructions, context, and a question, the model can generate a concise and accurate answer.

[context text]

Please use the above text to answer this question: [Question about above text].

Text Classification

The model can also classify text based on instructions provided in the prompt. For instance, it can classify a given text as neutral, negative, or positive. If you need the model to provide the label in a specific format, you can provide examples in the prompt to guide the model’s output.

[input text]

Please tell me if the above response was positive, neutral, or negative:

Conversation

You can also use the model to generate conversational responses. By providing a role for the model (e.g., an AI research assistant), and specifying the tone of the responses (e.g., technical and scientific), you can guide the model’s behavior in the conversation.

[sample email text]

Please suggest 3 different responses to the above email:

Code Generation

The model can generate code based on instructions provided in the prompt. For example, you can ask the model to write a simple program that greets the user, or a more complex MySQL query based on a given database schema.

Please write a simple program in Python that greets the user and asks how they are doing.
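For reference, here is one simple Python program a model might plausibly return for that prompt. This is a sketch of a typical answer, not an actual model output:

```python
def greeting(name: str) -> str:
    """Compose a greeting that also asks how the user is doing."""
    return f"Hello, {name}! How are you doing today?"

print(greeting("Sam"))  # Hello, Sam! How are you doing today?
```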

Reasoning

While current language models struggle with tasks that require reasoning, they can perform basic arithmetic tasks. However, for more complex reasoning tasks, you might need to provide more detailed instructions and examples in the prompt.

Riddle: What has to be broken before you can use it?

Can you solve the above riddle?

GPT-4’s response:

The answer to the riddle is “an egg”. An egg has to be broken before you can use it for cooking or eating.


Prompt Engineering Guide

Guide To Prompt Engineering Basics:

Here’s a detailed guide on how to interact with Large Language Models like ChatGPT, GPT-4, Bard, etc:

  1. Be Specific: The more specific you are with your questions or prompts, the better the model can generate a relevant response. For example, instead of asking “What’s the weather like?”, you could ask “What’s the weather like in New York City on May 14, 2023?”.
  2. Provide Context: If your question or statement refers to something mentioned earlier in the conversation, make sure to include enough context so the model can understand. For example, if you’re asking a follow-up question, briefly summarize what you’re referring to.
  3. Use Correct Syntax: Large language models like ChatGPT understand natural language, so you can write to them just like you would write to a human. However, using correct grammar, punctuation, and spelling can help the model understand your input better and generate a more accurate response.
  4. Ask for Clarification: If the model’s response is unclear or seems incorrect, don’t hesitate to ask for clarification or more information.
  5. Experiment with Different Approaches: If you’re not getting the response you want, try rephrasing your question or providing more information. Different inputs can lead to different outputs.
  6. Understand the Model’s Limitations: Remember that while large language models are powerful, they don’t know everything. They don’t have access to real-time or personal data unless it has been shared in the conversation. They also can’t perform tasks outside of generating text, like making phone calls or browsing the internet.
  7. Use Tools and Features: Many interfaces for large language models offer additional tools and features. For example, you can use system messages to set the behavior of the assistant, or use the “Wolfram” tool to access dynamic computation and curated data.
  8. Respect the Guidelines: Always follow the guidelines provided by the platform you’re using. This includes not using the model to generate inappropriate or harmful content.
  9. Give Feedback: If the platform allows it, providing feedback on the model’s responses can help improve its performance over time.
  10. Enjoy the Interaction: Lastly, have fun! Interacting with a large language model can be a unique and interesting experience. Enjoy the process of exploring what the model can do.
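To make item 7 concrete: in chat-style APIs, a system message is simply the first entry in the message list. The field names below follow the widely used OpenAI-style chat format, but they are shown only as an illustration; check your provider’s API reference for the exact schema:

```python
def build_chat_request(system_prompt: str, user_prompt: str) -> dict:
    """Assemble a chat-completion request body with a system message
    that sets the assistant's behavior (illustrative schema only)."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_chat_request(
    "You are an AI research assistant. Answer in a technical, scientific tone.",
    "Explain what prompt engineering is.",
)
```

The system message persists for the whole conversation, so it is the right place for role and tone instructions like those in the Conversation example above.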

How Can I be Good at Prompt Engineering?

First of all, study the prompt engineering guide above.

Second, you can ask GPT-4 itself this question (ironically, this works quite well).

Also, we suggest taking a course on prompt engineering.

What are The Basics of Prompt Engineering?

The basics are: be specific, give as much context as you can, and be intentional with the language you use.

Check out Prompt Engineering Tools.

Is prompt engineering hard?

Nah, it is just a new skill for, well, EVERYONE, and anything new can seem overwhelming when you first start to learn it.

Here are some Prompt Engineering Examples to get you started.

How much do AI prompt engineers make?

This varies greatly. We’ve seen job postings of $300k+/yr, but I wouldn’t bank on that type of income were I you.

How to Make Money with Prompt Engineering?

There are basically unlimited ways to make money with prompt engineering, because the applications of LLMs like GPT-4 span so many areas of human endeavor…

You can make money with Prompt Engineering by becoming better & faster at your job.

You can make money with Prompt Engineering by improving your business.

You can make money by getting a prompt engineering job.

…ETC

One idea: