Solving Big Tech abuses with a GPT? Sure thing!
I'm still experimenting with my iCopilots-GPT, and I just fished out an article of mine from 2022 discussing ways to regulate Big Tech platforms and limit their abuses. In it, I referenced an article by Anil Dash that laid out his version of the Big Tech playbook: how to avoid sanctions and abuse our regulations, laws, and plain common decency whenever needed.
Back in 2022, I didn't have the time to go further than saying:
> Every single one of these steps is a regulation entry point. Work on regulations on two or three of them, and you slow down these now mindless machines. With four or five, you pretty much break their cycle of negativity and force them to be responsible for their content.
Smart? Maybe.
But now, armed with my trusty (?) GPT, I can go further.
First, I asked it to condense the 24 ways Big Tech abuses the system into just 12, regrouped by theme, and then to create a table pairing each original 'abuse' with what a regulatory solution could look like. Tweak everything for five minutes, et voilà:
| Theme | Ongoing Abuse | Possible Regulation |
| --- | --- | --- |
| 1. Platform transparency | Platforms claim neutrality while significantly influencing culture and public norms. | Require platforms to disclose how they shape cultural norms and provide detailed impact reports, reviewed by independent agencies. |
| 2. Growth-driven culture | Staff incentives are aligned solely with growth, ignoring societal impact and creator welfare. | Legislate balanced performance metrics for staff, integrating societal and creator well-being into growth targets. |
| 3. Neglect of moderation | Minimal investment in safety and moderation allows harmful content to go unchecked and thrive. | Impose minimum investment standards in trust and safety teams, mandating proactive and comprehensive moderation practices. |
| 4. Cultural erosion | Marginalized creators who were instrumental to early success are sidelined or driven away. | Enforce laws that protect and support marginalized creators through equitable visibility and funding opportunities. |
| 5. Copyright and monetization | Platforms exploit original content and mishandle creator rights, often monetizing unlicensed content. | Strengthen creator rights, making it illegal for platforms to monetize unlicensed content and ensuring fair compensation for original works. |
| 6. Algorithm bias | Algorithms favor established creators, perpetuating inequality and blocking opportunities for new voices. | Mandate transparency and independent audits of content algorithms to ensure equitable treatment of new and diverse creators. |
| 7. Exclusive content deals | Platforms overpay a few superstars, neglecting the development of a broader, sustainable creator ecosystem. | Impose caps on spending for superstar content deals and incentivize platforms to invest in developing diverse creator ecosystems. |
| 8. Exploitative practices | Platforms replicate harmful, extractive behaviors from traditional media industries, worsening creator exploitation. | Prohibit exploitative contracts and revenue-sharing schemes, ensuring platforms uphold fair practices for all content creators. |
| 9. Internal dissent suppression | Employee concerns about ethics and content safety are ignored or punished, silencing internal critics. | Protect whistleblowers and mandate platforms to establish channels for meaningful employee input on ethical concerns. |
| 10. Crisis mismanagement | Crises are managed with superficial PR rather than addressing structural accountability, even when harm occurs. | Hold platforms accountable with fines or structural mandates if they fail to respond to crises responsibly and transparently. |
| 11. Harm-driven engagement | Platforms use emotionally manipulative content to drive engagement, disregarding its harmful societal effects. | Enforce ethical engagement practices, requiring platforms to consider the impact of content on societal well-being. |
| 12. Regulatory avoidance | Platforms resist regulation, misuse free speech rhetoric to evade responsibility, and lobby against accountability measures. | Tighten regulations, removing "free speech" as a blanket defense for harmful amplification, and establish stronger accountability mechanisms. |