Great article! I told my friends just the other day: "I think I'm having an AI anxiety problem..." I started to feel that there is nothing new left to be invented (like the quote attributed to Charles Holland Duell). AI will solve all computational problems and eventually all software will become an OpenAI API wrapper (chills), so there will be no value left to be captured by "hand-made algorithms".
Thank you for writing this, Nathan! It's really valuable to take a step back and reflect. Interestingly, there is already an AI Bill of Rights out there, from the White House: https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Yeah, this was part of the joke :) and how the White House didn't make clear whether it's rights for AIs or rights for humans 👽
Nice article; good to see an insider's perspective. I'm an applied researcher who has worked across a number of industries. The safety issue, coupled with the accuracy issue, is paramount for me. My concern is the rush for companies to implement and integrate AI solutions when our best models still give surprisingly inaccurate results at times. Case in point: I've tried the experiment of having ChatGPT write some simple code in VBA. It took a surprising amount of prompting and correcting to get it to do what I wanted in the end. Additionally, when trying to port a simple macro from PC VBA to Mac, it confidently responded with more code for a PC. That eased some fears I had about being replaced by AI as a developer/researcher. I'm not sure that any model can easily replicate the innovative thinking needed to stay on the forefront of an industry. Still, I am in the camp of evaluating carefully what the applications of AI are, and then looking to see what risks and unintended consequences might emerge as a result. A fascinating time for sure!
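To illustrate the kind of platform gap that tripped it up: Scripting.FileSystemObject only exists on Windows, so a direct PC-to-Mac port breaks unless the code branches on platform. Here's a minimal sketch using VBA's built-in #If Mac conditional compilation (this folder-listing macro is a made-up example, not the one I actually ported):

```vba
' Minimal sketch: list the files in a folder on both platforms.
' Scripting.FileSystemObject is a Windows-only COM object, so the
' Mac branch falls back to VBA's built-in Dir() function instead.
Sub ListFolder(ByVal folderPath As String)
    #If Mac Then
        ' Mac VBA: no COM objects, so walk the folder with Dir()
        Dim fileName As String
        fileName = Dir(folderPath & Application.PathSeparator)
        Do While Len(fileName) > 0
            Debug.Print fileName
            fileName = Dir()
        Loop
    #Else
        ' Windows VBA: late-bound FileSystemObject
        Dim fso As Object, f As Object
        Set fso = CreateObject("Scripting.FileSystemObject")
        For Each f In fso.GetFolder(folderPath).Files
            Debug.Print f.Name
        Next f
    #End If
End Sub
```

(Dir() actually works on both platforms, so for simple cases the Mac branch alone would do; the split is just to show where Windows-only dependencies hide.)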
Thanks for this, Nathan. Definitely saw this at the Hugging Face event last week. Lots of excitement, lots of FOMO. It's okay to jump in and build because it's fun, but so many entrepreneurs I met want their OpenAI wrapper to be the next unicorn and are afraid of not being first. For my part, I keep telling them to keep calm and focus on real problems, since first mover advantages aren't as evident for many of the reasons you pointed out.
Subbed!
I absolutely needed this today. Excellent advice!
Love reading your takes on the state of things Nathan! That's it, that's the comment.
Despite things _feeling_ like they've changed (in terms of hype/attention), I think a lot of this was locked in once the original GPT class of models started bearing fruit from time and refinement. So while it is stressful, I find it helpful sometimes to step back and note that the underlying technology hasn't drastically changed; a bunch of people have just realized how interesting it is.
Agreed (even though I just started working with LLMs). The biggest challenge facing people in the short term seems to be the discourse, which always seems somewhat separated from the technological reality.
We're just getting going technologically; folks need to get on board with that.
What questions do you think could be helpful to ask my managers if my AI team is experiencing burnout?
re: "ask your manager and skip-manager some of the questions posed in this article"
Things like how the plans change if someone releases a similar model, whether they know we're feeling pressure, and so on. Does that help?
Thanks for this. Process is important. Most folks are good at creating products, but the process folks build enduring systems. Tech runs on infrastructure, and designing and building flexible, extensible frameworks underpins the evolution of technologies. It also means the designers define some of the guardrails of the system. Process doesn’t get the recognition product does, but it holds the secret to longevity, and hopefully ethics.
I think AI is moving faster than we can cool down and think about the implications
Title: Behind the curtain: AI
Author: Greg Kumparak
Date: April 5, 2023
Summary:
This article discusses the importance of understanding the AI systems that are increasingly being integrated into our daily lives. The author emphasizes that while these AI systems can be incredibly powerful and efficient, they are not infallible, and their mistakes can lead to negative consequences. To make responsible use of AI, it is crucial to understand their limitations and the biases that can be present in their training data.
The article highlights three main challenges that arise from using AI:
1. Misunderstandings - AIs can easily misunderstand or misinterpret situations due to their reliance on statistical patterns. When their training data is limited, AIs might provide incorrect or unhelpful answers.
2. Lack of explanation - AI systems are often considered "black boxes" because they do not provide clear explanations for their conclusions. This makes it difficult to understand their reasoning, especially when they make mistakes.
3. Bias - AIs are trained using large datasets, and these datasets can contain various biases. As a result, AI systems might perpetuate or even amplify these biases, leading to unfair or prejudiced outcomes.
To address these challenges, the article suggests that users should learn more about the AI systems they use, as well as the datasets they are trained on. By doing so, they can make more informed decisions about the trustworthiness and usefulness of AI-powered solutions. Additionally, increased collaboration between developers, researchers, and users can help identify and address biases, ensuring more equitable and robust AI systems for everyone.
Is this a ChatGPT summary lol?
Having Congresswoman Eshoo going around threatening people who open-source their work with national security inquiries must not help much, either.
Jeez, is this actually happening?
Insightful post! I have described this effect as "technological acceleration anxiety": the looming sense that the ever-increasing pace of change surpasses our ability to reason about its effects and make good, rational judgments about which paths we should take.
Burnout is not good: people under stress will ultimately make poor decisions, and this compounds on itself.
These effects are also not limited to those working on AI projects; they extend to society at large as consumers of the technology, directly or indirectly. Many of the potential societal effects are not yet part of common discourse, but I've elaborated quite a bit on that here, and it reinforces some of the topics you discuss in your article.
https://dakara.substack.com/p/ai-and-the-end-to-all-things
On being process- rather than outcome-driven - have you heard of Ought, or read their piece on this? https://ought.org/updates/2022-04-06-process
will take a look!
I think talking about it among coworkers and friends should be enough. People tend to care, but it’s hard to start the conversation.