Getting Started: How to Learn About AI
Or what happens when you google "free AI course"
In my first post, I mostly talked about why I pulled the trigger to register a (rather expensive) *.ai domain, what I think about the “metaverse” in general — guess who else jumped into that discussion:
… sorry for digressing, this is a recurring theme in my writing — and especially its “openness”; how it is connected with XR, the digitization of everything, and AI (I believe a true metaverse cannot exist without AI); and that all the concepts involved in the process towards a metaverse need to be understood by the people involved. I also linked a few fun AI tools to play around with, so head over to the first post if you want to go down that rabbit hole.
And Now Begins My AI Learning Journey
But wait, let me digress once more. On my trip to Marseille last week, I was solving Brilliant.org math puzzles and listening to this hilarious German podcast (I know that hilarious and German sound wrong together in one sentence, but what can I say). Anyways — one of the two podcasters, I believe it was Florentin, said that random generators would rule the universe instead of anything AI or robot. The two then went on to fantasize about a completely randomly generated magazine, and I happily admit that I laughed a lot at that since I do have the weird sense of humor you need for DAS PODCAST UFO. And it is not even that absurd, if you think about it.
Can I even say if the funny brand name and logo generation tools I linked last time are based on AI, or are they just randomly spitting out values they were fed before, i.e., are they a scam?
What does that mean for teaching non-engineers what AI is and how it can be used in all areas of life?
If we look back in the history of AI, the first AI “scam” was ELIZA1:
ELIZA is an early natural language processing computer program created from 1964 to 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. Created to demonstrate the superficiality of communication between humans and machines, Eliza simulated conversation by using a "pattern matching" and substitution methodology that gave users an illusion of understanding on the part of the program, but had no built in framework for contextualizing events. Directives on how to interact were provided by "scripts", written originally in MAD-Slip, which allowed ELIZA to process user inputs and engage in discourse following the rules and directions of the script. The most famous script, DOCTOR, simulated a Rogerian psychotherapist (in particular, Carl Rogers, who was well-known for simply parroting back at patients what they had just said), and used rules, dictated in the script, to respond with non-directional questions to user inputs. As such, ELIZA was one of the first chatterbots and one of the first programs capable of attempting the Turing test.
I said “scam” because from today’s perspective, ELIZA is a rudimentary chatbot with a limited set of possible responses within the framework of scripts and rules based on natural language processing — but in the 1960s, this seems to have been the state of the art in human-machine communication.
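The pattern-matching-and-substitution idea is easy to sketch in a few lines. Here is a minimal, hypothetical ELIZA-style responder in Python; the rules are invented for illustration and are not taken from the original DOCTOR script:

```python
import re
import random

# A minimal ELIZA-style responder. Each rule is a regex pattern plus
# response templates; \1 is filled with the captured part of the input.
# These rules are made up for the demo, not Weizenbaum's originals.
RULES = [
    (r"i need (.*)", ["Why do you need \\1?", "Would it really help you to get \\1?"]),
    (r"i am (.*)",   ["How long have you been \\1?", "Why do you think you are \\1?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            # Substitute the captured text back into the chosen template.
            return match.expand(random.choice(responses))
    return random.choice(DEFAULTS)

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
```

No understanding anywhere, just string surgery — which is exactly why it felt so uncanny to 1960s users when it echoed their own words back at them.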
Today’s simpler chatbots (the predecessors to conversational AI as used in smart home devices) still stick to scripts and are not truly “intelligent” (but there are AI chatbots as well, of course — just to make the whole definition even harder to pin down). But how can a regular person tell the difference between a tool that is just following scripts (or: randomly generating things) and a tool that is based on some kind of AI, including ML?
The answer sounds like my relationship status: It’s complicated.
All I can say right now is that the prevalent overpromising of what AI will be capable of in the near future (in a positive or negative sense), and the underdelivering of both “true” AI and tools merely labelled as AI that may be interpreted as “scam”, lead to unhelpful emotionalism (ah, those -isms) in different media:
Is AI learning a scam? (Jul 15, 2020) — ok, that one is a bit on the edge of a conspiracy theory
Why “AI” Is A Fraud (Sep 28, 2020)
The superiority of AI: the scam of the century (Mar 31, 2020)
The Artificial “Intelligence” Scam Is Imploding (Feb 17, 2020)
The crux is that companies of course have to make those promises as background noise to build the AI hype on. I wasn’t consciously following along when the dot-com bubble built up and burst, but I am definitely not the first one to sense we are in the middle of another AI-crypto-digitization-robot bubble (one that inflates more slowly due to longer tech cycles), with the whole metaverse narrative emerging since… 2021?2 There is a strange gap on the Wikipedia page between the 2006 launch of Roblox and the 2018–2021 metaverse-ish launches, so I’d need to take a closer look to remove that big question mark.
In any case, the pressure is real for AI companies: they have to make bold statements to attract investors, even though they know they can’t deliver on all the promises. And even though they know there are freeloaders out there who will capitalize on opportunities to sell “bad” AI products, or “scam”. And then everything AI gets roasted (yeah, why hasn’t a robot already taken my job, that can’t be so hard, right?! Just kidding, I know I am absolutely irreplaceable) and the public perception of AI tilts towards a negative stance.
And in the end, the public has to adopt whatever AI products companies throw at it, or the products just die with their hype cycle. The investors bring the money to develop those products, but the public needs to be able to integrate them seamlessly into their lives to make them a success (like smartphones and social media). Even the most disruptive innovation fails if people just don’t want to use it.
End of digression. At least for now.
Hey, Google — Where Can I Learn About AI?
Full disclosure: I don’t use any voice assistants; I am happily typing everything I want to know into Google (and other search engines, e.g., DuckDuckGo because #birb… er, privacy). Yes, it is a bit naïve to just google “free AI course”, because the number of results is just overwhelming. In fact, we are so conditioned to see a large number of pages in the Google search that it somehow feels like a glitch in the matrix when you get only a handful of results for whatever it is you are looking for.
Nevertheless, I jumped into that rabbit hole of Google search results and would like to present a few resources I have identified as useful:
Alright, let’s get started!
When I told A. about my idea to “learn AI”, his absolutely justified question was: cool, but what exactly do you want to learn? Do you want to be a programmer? — I violently shook my head, since there is no way I can compensate for all the years of not programming with a few programming courses. I want to understand the fundamentals and use cases, and I want to be able to communicate with all stakeholders involved in AI projects, I told him. So he nudged me towards Brilliant, which I had never even heard of before.
Brilliant has 60+ courses on math, science, and computer science — including the ones that will direct your learning towards AI-related topics. I purchased the premium version for around 120 €/year and this is my progress since October 16:
I. Am. So. Hooked. I should probably follow one of the recommended course paths instead of being all over the place, but I love to dive into different topics at the same time. I have to activate all my nonexistent science knowledge to understand what Brilliant teaches me about neural networks, and I feel like the greatest math and physics noob of all time doing the basic, basic stuff, but it fascinates me in ways I never dreamt of at school (although logic has been my thing ever since I used Aristotle’s Logic in my Master’s thesis about cyberculture).
So here’s a recommendation: Go get Brilliant if you want to learn about the foundations of computer science that ultimately lead to AI.
Unfortunately, Google’s own AI web presence does not have a learning path to follow, just a whole lot of categories and rabbit holes to fall into. Its vision is “Bringing the benefits of AI to everyone”, which tells me they spent some time and effort to make sure laypersons can grasp the concept of AI they want to communicate.
While it is interesting to read about their responsibilities and research, I went straight to the education part. It has a bunch of posts and some categories to filter them by, but I recommend going to their guide “Using AI for social good”3 first. It begins with some Google Cloud AI Adventures videos, the first of which is What is Machine Learning? with Yufeng Guo.
At episode 56, Priyanka Vergadia takes over. I only watched the first few episodes so far, but I feel they helped me internalize a few terms and principles that I wasn’t fully aware of before. I stopped at video 5 which is about model visualization with TensorBoard, since I haven’t really gotten into TensorFlow yet and I don’t feel it makes sense to just binge-watch them like the latest season of Lucifer.
I then clicked on the second link in the “Using AI for social good” section, which led me to a collection of AI experiments. Its header reads:
AI Experiments is a showcase for simple experiments that make it easier for anyone to start exploring machine learning, through pictures, drawings, language, music, and more.
This short introduction tells me it’s easy to use AI, since we are “only” exploring machine learning, a “type” of AI:4
Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output values.
I digressed from that Google website back to Google to find out the exact difference between ML and AI, and this is where I learned that ML is a subset of AI and that sophisticated ML deep learning models “power today’s most advanced AI applications.” To sum things up,…5
AI solves tasks that require human intelligence [such as thinking, reasoning, learning from experience, and most importantly, making its own decisions] while ML is a subset of artificial intelligence that solves specific tasks by learning from data and making predictions [without explicitly being programmed or assisted by domain expertise].
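The “learning from data without being explicitly programmed” part is easiest to see in code. Here is a toy sketch (the data and the use case are my own invented example): instead of hard-coding the Celsius-to-Fahrenheit formula, we estimate it from example pairs with ordinary least squares.

```python
# "Learning from data": fit a line to example pairs instead of
# hard-coding the Celsius-to-Fahrenheit conversion rule.
celsius    = [0.0, 10.0, 20.0, 30.0, 40.0]
fahrenheit = [32.0, 50.0, 68.0, 86.0, 104.0]

n = len(celsius)
mean_x = sum(celsius) / n
mean_y = sum(fahrenheit) / n

# Closed-form least-squares estimates for slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(celsius, fahrenheit)) \
        / sum((x - mean_x) ** 2 for x in celsius)
intercept = mean_y - slope * mean_x

print(f"learned: F = {slope:.2f} * C + {intercept:.2f}")   # F = 1.80 * C + 32.00
print(f"prediction for 25 °C: {slope * 25 + intercept:.1f} °F")  # 77.0 °F
```

The program was never told the conversion formula; it recovered it from the historical data, which is the whole point of ML in miniature. Real models just do this with far messier data and far more parameters.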
Back to Google.
I only explored the first collection so far, AI + Writing. And it is so beautiful! I can’t wait to get my hands on an AI so I can feed it two years of triathlon-related blog posts to see what kinds of blog post lines it would come up with. Because this is essentially what the project was about: the input for the AI was texts authors had written or were working on, and the AI’s output was new sentences that most of the time didn’t make sense in a traditional way — but helped authors overcome writer’s block or inspired them to reflect on what they were writing.
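The crudest version of “feed it your old posts and let it spit out new lines” is a Markov chain, which only counts which word tends to follow which. The experiments above use neural language models, but this sketch (with a made-up mini-corpus) shows the basic idea of generating text from an author’s own material:

```python
import random
from collections import defaultdict

# A bigram Markov chain text generator: for each word, remember which
# words followed it in the corpus, then walk those transitions randomly.
def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8):
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the word only appeared at the end of the corpus
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Tiny made-up "blog corpus" standing in for two years of posts.
corpus = "the race was long and the race was hot and the bike was long"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The output is grammatical-ish nonsense most of the time — which, for the writer’s-block use case, is arguably a feature, not a bug.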
It reminds me of the most hilarious movie script reading I ever watched:
It is based on this tweet by writer Keaton Patti:
He even “forced” a bot to write a book after he fed it with thousands of pages from different literary genres. It’s hilarious. And there is even a serious, animated version of the first page from the Batman movie script:
Back to the serious business. Now if you close the “AI + Writing” collection and the “AI Experiments” collection, you realize the introduction to the “Using AI for social good” guide also has a rather sloppily organized Machine Learning glossary. It feels a little random, and some terms have funny pictograms next to their names, but at least you can CTRL+F the entire page.
Apart from that, the course “Using AI for social good” seems to be a good starting point to learn more about Google’s “efforts to apply AI to humanitarian and environmental challenges.” It has seven chapters in total (like the introduction where I found AI + Writing) and a ton of posts to read. Back on the main page of Google AI’s education section, there are 37 resources in total to choose from: courses, sample code, videos… It just drives me crazy that the one “guide” is actually a concept overview, that “courses” has ten courses but also two guides and one competition (which does not have a separate label), and that “documentation” has two additional guides. Ugh, why?!
I feel like this whole website has a lot of potential to teach laypersons about AI, but it is too messy and does not provide orientation. But it’s free and I want to go through all of it, especially since it has both theoretical approaches and real-world examples.
I am absolutely pro paying for great online courses and resources, but it can’t hurt to start out with a few freebies. Actually, there are a lot of websites that already list the “best” or “best free” AI online courses, some even with the very useful addition “in 2021”. But I don’t think the basics of AI are changing so quickly that you need to be skeptical if a course or resource has not been published yesterday. I do however think that certain use cases and practical applications of AI are quickly changing as technology evolves iteratively in small steps. Think of how SEO has changed within the past few years — a guide from 2018 would be completely useless today.
The free online courses I will look into are the ones recommended in this random article I found when googling “free AI course”, and then some more:
Stanford's free AI course on Coursera by and with Andrew Ng
AI for Everyone by DeepLearning.AI on Coursera, also with Andrew Ng
Elements of AI is a free standalone course from the University of Helsinki
Forbes also listed a few free ones in 2020 (and in 2018 — but there is a warning that the article is more than three years old, which I find really useful), and Google tells me there are about 708,000,000 results for my search query, but I think we’ll be fine with those for now; it probably doesn’t make sense to complete all of them anyways.
I am not sure when I will write the next monster post, but I want to tackle the question: Can I have my own AI?
Stay tuned and hit “subscribe” if you liked this post 👇