Tag: tech

  • book review: Empire of AI

    Empire of AI is an informative and chilling read by AI expert and investigative journalist Karen Hao. The book dives into the inside story of OpenAI and Sam Altman, long before ChatGPT was a thing. OpenAI started as a nonprofit organization in the contemporary theater that is Silicon Valley, with Sam Altman and Elon Musk (who left not long after) convinced that they were working towards building safe AI, as opposed to rogue AI, which was at the time an imaginary threat at best and a collective hallucination at worst. They were convinced that it was only a matter of time before it happened, and that they had to control the AI space and be the paragon of what good AI looks like.

    A core theme of this book is belief. Belief in deep learning. Belief in AGI. Self-belief. How belief mobilizes and incites. Who is and isn’t to be believed.

    The author narrates the story of the company and the many characters involved in getting us to where we are today. With sharp observations and opinions, the author shares with us the conversations she’s had with OpenAI employees and others working in the industry. This book confirmed my suspicions that nobody knows what they are doing, and yet I was gobsmacked by the sheer on va voir (“we’ll see”) type of energy from the top brass. In one brief recounting, one of them is performing a fire ritual with a wooden effigy.

    ChatGPT’s success surprised the OpenAI team, since it was a basic version of what they were using internally. Their research was all about proper AI (not just glorified chat bots) and “AGI”, which, mind you, nobody knows what that means, but the execs believed it could be done. So the teams dabbled in robotics, video games, and probably other research. With ChatGPT blowing up, the company was forced to move compute resources from research to this rising star.

    Now comes the classic supply and demand issue. They were forced to add more compute power and keep improving the product so the cash cow kept generating revenue, to secure funding from Microsoft for chips, and to keep competitors (Google, Meta) at bay, all while wading through internal politics. The internal politics largely involved the Safety team not having enough time or resources to conduct proper tests or follow proper procedures. Sam Altman, we learn, is famous for being charming and convincing, telling people exactly what they want to hear while keeping his motivations concealed. The Amodei siblings, who initially headed the Safety team, became completely disillusioned and left OpenAI to form their own company, Anthropic.

    OpenAI continued to struggle and succeed simultaneously. More importantly, the author shows us the degree of fanaticism and delusion among some of these execs at the highest levels, the ones responsible for making decisions that affect millions of people. The things they say and believe have me questioning the reality we live in. As if I don’t question it enough already. It seems to me that they live in a bubble of their own making, in a world far removed from the ground.

    Another point the author raises is the lack of independent research. Graduate students and researchers often join tech companies to make a living. Their focus then narrows to what the companies want, to what will bring profits. This leads us to wonder whether there are alternative solutions and methods of implementing AI that we are missing out on because of the capitalist monopoly games.

    In stark contrast to the shiny Silicon Valley offices, the author walks us through small cities in Kenya and Chile, where people are forced to do data annotation for cents, often being exposed to harmful content; where people eke out a living mining raw materials with few benefits; where companies like Google and Microsoft want to build data centers that drink up the water resources the residents already struggle to get. In general, the companies target countries with unstable economies so the people are willing to work for pennies. On top of the exploitation, they pollute the air, water, and soil in these areas. That’s why it’s laughable when they talk about solving climate change with AI. These are just pretenses; they never cared about climate change. The pretenses will drop as soon as they have what they want.

    All is not lost, as activists continue to fight the giants with what they have. Some have seen success, but the problem persists. Their accounts are eye-opening and heart-warming, showing us a glimpse into their lives, their work, and their motivations, and inspiring us readers to collectively become aware and take action against the crimes of the tech companies.

    The author ends with possible frameworks that can help build a better tech future that benefits all. This includes redistributing knowledge, resources, and influence.

    Whether or not you work in the tech industry or use AI at all, I’d recommend reading this book. I’d especially recommend it if you work in the tech industry and are caught up in the AI race. Reading this book would be the equivalent of touching some grass. The issues highlighted here are not specific to the current AI industry; they are long-standing problems, as the tech industry upholds and propagates exploitation and colonization. And those of us in the tech industry must decide just where we are directing our energy. In my opinion, the narratives that companies like OpenAI push about AI taking all jobs, going rogue, and being harmful to humanity are problems of their own making. They talk about an imaginary Universal Basic Income while people continue to suffer, yesterday, today, and tomorrow. Their vision of AI is unoriginal and unsustainable.

    Bottom line: there’s no doubt that AI tech and its computational power are impressive, and valid cases exist where this power can be hugely beneficial. But any benefit at the cost of human lives is no benefit at all. We cannot continue to exist the way we have been, and we have much more power and influence than we are led to believe.

  • the AI problem

    I work in tech, and every time someone mentions AI, I want to take a shot of vodka or a drag of recreational drugs. I would be long gone if I actually did this. But I work remotely, so I can get up and walk around to feel less frustrated. At least I get some movement in. For the past couple of years, anything and everything is called AI. As a writer, and a technical writer at my day job, I’m annoyed by the lack of specificity. It’s okay to call it a chat bot, or an automation using some Python scripts, or just a feature that summarizes batches of text. That’s straightforward, easy to grasp, and not misleading. This AI mirage created using words irks the part of my brain that wants clarity.

    However, my problem is not just with the words associated with AI. It’s everything.

    the user problem

    I’ve seen a few common ways in which it (typically a chat bot) is used: as a search engine, creating summaries, drafting, asking for legal and life advice, as a companion to talk with, coding, generating images, videos, and audio, study help, etc.

    Using it as a search engine leaves out context on who is providing that information. And that matters. I understand why one may not want to go through five different articles to find information on something trivial, but using it for every search creates the risk of falling for false information when we don’t check the sources. None of us are immune to misinformation, as much as we’d like to think we are. Chat bots are also known to “hallucinate” and give you outdated information. Working with this is detrimental to you as a user.

    Using chat bots to create summaries of lengthier documents, essays, articles, and books is again problematic. Sure, who hasn’t read a summary of a movie or book we couldn’t be bothered to finish? But when we rely on summaries as our main source of information, we miss out on what that particular piece has to offer. We miss details or misinterpret them; we deny ourselves the chance to learn something new that might give us a different perspective or inspiration. We miss out on all these exciting things.

    And as far as generating images is concerned, why do we need AI to generate fake human pictures and videos? We can well imagine the wrong ways in which it can and will be used. How can this possibly be regulated? I can think of better uses: generating images for scientific purposes, simulations in engineering fields. Sure, that makes sense. Those are actual use cases. But generating memes or “art” is wasteful and not even fun.

    When I think of using chat bots or AI tools to create work email drafts, or to take notes, or to study, such actions at first seem justifiable. A large part of the education system prioritizes grades and making money over learning, and so people will inevitably try to make their work easier. Why would they use their time and energy on tasks that make the days harder? With AI tools and automation, we can do basic and menial tasks faster and more accurately. But these are only the details. The actual problem lies underneath.

    the capitalism problem

    I don’t think that blaming individuals is going to change anything. The problem is with the capitalist system itself, which alienates a lot of us from our work and from our lives, pushes us into isolation, and breeds unnecessary competition. We could find joy in our work if we worked less. If we didn’t have to worry about our continued sustenance. In fact, it would be more efficient. It’s not a pipe dream; there’s plenty of evidence that our current way of working and living is stressful and harmful to ourselves and the environment.

    When AI is pushed onto us in every aspect of life, we must also ask why this is so. You must have heard the phrase, “if you are not the buyer, you are the product”. I believe the same applies here. When you use these tools, you are testing them, training the LLMs, providing them data. One may think this benefits us too, but such benefits are not justifiable.

    From the medical field to manufacturing to research, AI’s computational and analytical capabilities can be hugely beneficial. But the way it is implemented leaves a lot to be desired. It’s only a glorified chat bot with access to media made by humans. And the access that’s been granted is questionable itself. Companies like Meta, Google, and Microsoft can read and use our data to train their LLMs and call it “policy” or “terms of use”. For example, I recently noticed that Instagram translates reels by changing the speaker’s face and mouth movements to match the translated language. That’s not a feature; it’s creepy, ugly, and unethical. This did not come out of nowhere. Many teams at Instagram would have been involved in building and releasing it. There would have been meetings. Do those employees know how creepy, ugly, and unethical this is? It’s not me versus them; we are on the same side, as employees working in tech. Though I wonder what they think while they work on this.

    It’s the common adage: If I don’t do it, someone else will. The justification that we give ourselves to continue participating. After all, why would you shoot yourself in the foot for some morals? Morals don’t fill the stomach, do they?

    the feeding problem

    AI, and the LLMs that run it, works with what humans have created. Art, research, language, media, everything is fed to it, and it gives us answers according to what it has access to. So, if the quality of the input is bad, the output follows.

    What I mean is this: let’s consider the medical field. Say we give it all the information we have now. We know that the data is biased against women and people of color. Aren’t we propagating the same old issues? If we are going to use AI, we need to feed it better-quality information. This means that before implementing AI tech here, the prejudices and biases that humans hold must be addressed. The same goes for other fields.

    The security issue is another sharp knife cutting at the murky AI blob.

    the ethical and sustainability problems

    From how I see it, conversations on AI ethics and sustainability cannot really be separated. To illustrate, have you read Anatomy of an AI System? This 2018 paper maps the journey of Amazon’s Echo device from birth to death. The supply chain logistics required to produce these devices are shown to be terribly complex. We’re made aware of the brutal mining processes, the exploited human labor, and the pollution that run our modern tech lifestyle. It is a stark reminder of how meaningful regulation lags far behind the advancement of AI features.

    When we picture AI ethics, we usually imagine robot laws, or whether a machine can be held responsible, or the abundant security and privacy concerns surrounding the unimaginable amount of data we generate today. But it would be good to remember that AI ethics must also cover how AI is built, along with how it is used. Before the current AI boom, we already had AI in the form of chat bots and smart devices. Manufacturing all our devices comes at a great cost: human lives and earth’s resources. Training data sets, annotating images, and moderating content are carried out by human workers who are often paid in misery and a few cents. All the processing power requires more servers, which require water to operate at optimal temperatures. With talk of building more data centers around the world (especially in the Global South), it is regular people who will continue to bleed. What is AI to them when they suffer physically and mentally? What is progress to them?

    I’m constantly reminded of the short story The Ones Who Walk Away from Omelas by Ursula Le Guin. It is the essence of our reality in the form of a short, fictional story. If our convenience depends on someone else’s struggles, what should we do?

    Every prompt you send, every response you receive, every computation a model performs takes up far more processing power than your average search. When these options are forcefully embedded into our daily digital lives, just how much energy is consumed in a day? A month? If the current trajectory of AI continues, what price must we pay for further advancement? It is true that our collective Internet usage also takes up a lot of power. Hosting this site, surfing the net, streaming music, everything takes electricity and water. So why blame only AI? As mentioned previously, most of the tech we have today is built on earth’s resources and cheap human labor. The problem already exists. Shoving AI into everything and operating ever more data centers aggravates this existing problem exponentially. We could be working towards decolonizing tech and making it more sustainable and enjoyable. Enforcing AI is taking us in the opposite direction.

    One other point on ethical use is intellectual property. AI is used to generate audio, video, and text in the style of a particular singer, director, or writer. Me being inspired by another artist and imitating their work is different from stealing it to generate slop. Where does it end? Alarm bells are already ringing as AI is used to create fake videos of politicians, fake news items, and pornography.

    the future problem

    Let’s take a minute to think about the future (and current) generations brought up in such environments. If we don’t make it clear how AI should and shouldn’t be used, the problems we see today will only get worse. When I was in school and using the Internet on a desktop computer was becoming mainstream, I remember my parents warning me to be cautious about the sites I used and the information I gave out. There were also plenty of articles and school lessons about the dos and don’ts of Internet surfing. Though I did end up on piracy sites, that’s neither here nor there. I’m afraid that a good percentage of newer tech users aren’t grounded in that same curiosity and fun. I don’t blame them; how could they know?

    It is high time for updated, comprehensive courses on using AI responsibly, aimed at the common person. This requires thoroughly explaining why we condemn it and providing alternatives. Most of all, it requires you and your unceasing criticism of and protests against corporations and governments.

    Unfortunately, I haven’t found much that inspires optimism, but we can only move up from here. I want a better tech space where we make informed and inclusive decisions. I cannot deny that AI and tech have great practical uses. That’s why I want this to benefit all of us.