
From Steam Engines to Silicon Minds: Conscientiously Steering AI's Development

A fireside chat at the Commonwealth Club with renowned historian Yuval Noah Harari and tech ethicist Aza Raskin explored the implications of AI for humanity.

The Unfolding Impact of AI on Society

In a recent fireside chat at the Commonwealth Club, renowned historian Yuval Noah Harari and tech ethicist Aza Raskin discussed the profound implications of AI on the fabric of humanity. Their conversation offers valuable insights for technologists grappling with the rapid advancement of AI and its potential societal impacts.

Historical Parallels and the Pace of Change

Harari drew a compelling parallel between the current AI revolution and the introduction of railroads in England. He noted that while life looked similar two years after England was connected by rail, the changes became dramatic over longer timescales. Similarly, we're only two years into the AI revolution, and life still looks familiar. However, the potential for radical change in the coming decades is immense.

This perspective is particularly relevant given recent statements by industry leaders. Sam Altman, for instance, has suggested that superintelligence may be only a few thousand days away. As developers and designers, we must consider how our work today might shape society 10, 20, or even 50 years from now.

The Intersection of Technology and Society

A key theme of the discussion was the complex interplay between technological innovation and societal structures. Harari emphasized this point, stating:

"The worst problems of new technology don't come from the glitches of the technology, they come when the technology meets society."

Yuval Noah Harari

This observation underscores the need for technologists to think beyond mere functionality and consider the broader implications of their creations.

The Transformation of Information Ecosystems

One of the most striking examples of AI's impact is the transformation of our information ecosystems. Social media algorithms, which are essentially AI agents, have become the de facto news editors for many people. These systems, designed to maximize engagement, have inadvertently promoted outrage-driven content, fundamentally altering public discourse.

This shift raises critical questions about the future of human-generated content. As Harari and Raskin pointed out, we're approaching a future where human thought could become a minority in our global knowledge sphere. This trend is exemplified by Meta's recent announcement of AI-generated content for Instagram feeds.

Safeguarding Human Thought and Culture

In light of these developments, the speakers emphasized the importance of preserving human thought and culture. They suggested several strategies:

  1. Celebrating and nurturing human-generated content

  2. Aligning AI agents to act as fiduciaries for human interests

  3. Implementing an "education tax" on AI labor to invest in human development

1. Celebrating Human Creativity

As AI-generated content becomes increasingly prevalent, it's crucial to actively celebrate and nurture human-generated content. This means creating platforms and incentives that highlight uniquely human perspectives, experiences, and creativity. Further, we can label human-created content, similar to a verified badge on X (formerly Twitter). As technologists, we can design systems that amplify human voices and ensure they're not drowned out by the sea of AI-generated information. This could involve developing AI tools that augment human creativity rather than replace it, or creating algorithms that prioritize diverse human viewpoints in content curation.

2. AI as Fiduciaries

The concept of aligning AI agents to act as fiduciaries for human interests is a paradigm shift in how we approach AI development. Instead of creating AI that simply follows commands or optimizes for engagement, we should strive to develop AI systems that actively work in the best interests of humanity. This involves complex ethical considerations and may require new frameworks for AI decision-making. As developers, we need to think beyond functionality and consider how our AI systems can be designed to prioritize long-term human wellbeing over short-term metrics.

3. Investing in Human Development

The idea of an "education tax" on AI labor is an innovative approach to ensuring that advances in AI technology directly contribute to human development. As AI systems take over more tasks, a portion of the economic benefits could be channeled into education and training programs for humans. This could help mitigate the potential job displacement caused by AI and ensure that humans can continue to develop skills that complement, rather than compete with, AI capabilities. As tech professionals, we can advocate for and help implement such systems, ensuring that the rise of AI leads to a more educated and adaptable human workforce.

The Imperative of AI Safety

AI pioneer Stuart Russell has noted that the ratio of investment in AI capabilities to investment in safety practices is approximately 1,000:1. As technologists, we have a responsibility to advocate for a more balanced approach.

"There is currently a 1,000:1 gap in spending on AI capabilities compared to safety."

Stuart Russell

The discussion highlighted the urgent need for investment in AI safety. But the critical question remains: how much should be invested in safety relative to innovation? Drawing an analogy from biology, where humans spend roughly 20% of their energy on the immune system, the speakers suggested that a similar proportion should be dedicated to AI safety.

Ethical Considerations for Future AI Development

The conversation concluded with a provocative thought experiment: Imagine if we had banned business models based on maximizing human attention in 2012. Today, we face a similar crossroads with AI and human intimacy.

As developers and designers, we have the opportunity – and the responsibility – to shape the future of AI in a way that respects human agency and promotes societal well-being. This requires not just technical expertise, but also a deep understanding of ethics, sociology, and human psychology.

"Will the risk undermine the fabric of society so we cannot enjoy the benefits of the innovation?"

Aza Raskin

This question should be at the forefront of our minds as we continue to push the boundaries of AI technology.

By fostering interdisciplinary collaboration and maintaining a clear-eyed view of both the potential and pitfalls of AI, we can work towards a future where technological advancement and societal well-being go hand in hand.

Further Reading

The AI Dilemma: Navigating the road ahead with Tristan Harris

Yuval Noah Harari’s A.I. Fears

NEXUS: A Brief History of Information Networks from the Stone Age to AI

Until next time, keep innovating and stay curious!

Enjoyed this newsletter?

Forward it to a friend and have them sign up here.
