Company Directory - OpenAI

OpenAI

OpenAI is an artificial intelligence research lab focused on developing and promoting friendly AI for the benefit of humanity.

CCI Score: OpenAI

0.65

-5.06%

Latest Event

OpenAI Joins Trump-Initiated Stargate Project

Take Action

So what can you do? Support OpenAI by shopping with them, spreading the word, or offering your support in other ways.


OBJECTOR

OpenAI is currently rated as an Objector.

0 to +9 CCI Score
These companies deliberately avoid direct involvement with authoritarian practices. While they do not actively challenge oppressive regimes, they maintain a neutral stance to ensure they are not complicit in supporting such systems.

Latest Events

  • OpenAI Joins Trump-Initiated Stargate Project
    APR 18, 2025

    OpenAI has been named as one of the key companies—along with SoftBank, Tesla, and Oracle—set to lead the Stargate project, an initiative launched by President Trump aimed at transferring AI processing capacity to the United States. This involvement ties OpenAI to politically charged efforts that echo nationalist, authoritarian policies.

  • OpenAI Joins ROOST Initiative for CSAM Mitigation
    APR 18, 2025

    In an effort to advance child safety standards, OpenAI has joined a multi-company initiative, ROOST, alongside Google, Discord, and Roblox, to deploy AI-driven solutions aimed at combating child sexual abuse material. This move comes in the context of heightened scrutiny over safety practices on digital platforms.

  • Crackdown on Malicious AI Use
    FEB 22, 2025

    OpenAI has taken measures to remove accounts in China and North Korea that were exploiting its AI technology for harmful activities such as surveillance, influence campaigns, and financial fraud. The company used its AI-powered detection tools to identify these accounts, demonstrating proactive steps to mitigate misuse of its technology by actors linked to authoritarian tactics.

  • +70

    Technology and Services Impact

    April 11

    OpenAI’s action to remove accounts engaged in harmful activities marks a responsible use of its technology. By curbing the weaponization of AI tools for surveillance and propaganda, especially by actors linked to authoritarian regimes, the company reinforces a commitment to preventing abuses that align with fascist tactics. This proactive measure supports social responsibility and counters potential authoritarian influence.

    OpenAI Cracks Down On Malicious AI Use By China And North Korea

  • Sam Altman Warns Against AI-Powered Surveillance
    FEB 10, 2025

    OpenAI CEO Sam Altman publicly warned that AI technologies, particularly facial recognition and digital tracking systems, could enable mass surveillance by authoritarian regimes, posing a significant threat to civil liberties and democratic institutions.

  • +80

    Executive Political Engagement

    February 10

    Sam Altman’s public statement is a clear example of executive political engagement that challenges authoritarian misuse of technology. By warning against the use of AI to enhance capabilities for mass surveillance, he signals a stand against authoritarian practices, contributing positively to democratic values.

    Sam Altman Sounds the Alarm: AI's Role in Global Surveillance

  • +70

    Technology and Services Impact

    February 10

    The warning highlights concerns about the misuse of AI technology, aligning with the ethical imperative to scrutinize the societal impacts of advanced surveillance systems. This reflects positively on the company’s engagement in addressing technology's role in upholding democratic norms.

    Sam Altman Sounds the Alarm: AI's Role in Global Surveillance

  • OpenAI 2024 Political Contributions and Lobbying Profile
    FEB 06, 2025

    OpenSecrets data shows that during the 2024 cycle, OpenAI made political contributions amounting to $488,166 and spent $1,760,000 on lobbying. This level of political spending indicates significant corporate engagement in the political process, potentially influencing regulatory outcomes in ways that favor corporate interests.

  • -20

    Political Contributions and Lobbying Efforts

    April 11

    Although the detailed allocation of these funds is not provided, political contributions and lobbying expenditures at this scale can raise concerns about disproportionate corporate influence on governmental policy. In an environment where corporate influence may contribute to authoritarian leanings, this level of political spending is viewed negatively.

    OpenAI Profile: Summary • OpenSecrets

  • Embedded AI Censorship Contradicts EU AI Act
    JAN 04, 2025

    The article reports that OpenAI, along with Google and Microsoft, has embedded hard-coded, US cultural and corporate censorship mechanisms into its AI systems, including NSFW filters, which the report argues violate the transparency and accountability mandates of the EU AI Act.

  • -75

    Provision of Repressive Technologies

    April 11

    The report criticizes OpenAI for integrating rigid, non-transparent censorship algorithms in its AI systems that restrict user freedom and enforce a narrow US-based cultural perspective, thereby contributing to the proliferation of authoritarian control over information.

    How Embedded AI Censorship By OpenAI, Google & Microsoft Contradicts the EU AI Act – The Virtual Business School

  • OpenAI Expands Lobbying Efforts
    JUN 13, 2024

    OpenAI has expanded its lobbying team to consult with governments in the US, UK, Singapore and other countries as they grapple with AI regulation. This move signals an increased corporate effort to influence regulatory outcomes.

  • -40

    Political Contributions and Lobbying Efforts

    April 11

    By expanding its lobbying team, OpenAI is deepening its engagement in political processes to shape regulatory policies in its favor. This increased corporate influence on regulation raises concerns about potential regulatory capture and diminished public accountability, aligning with problematic corporate political behavior.

    OpenAI expands lobbying team to influence regulation

  • Sam Altman’s Political Donations for AI Regulation
    DEC 31, 2023

    In 2023, OpenAI CEO Sam Altman distributed nearly $300,000 in political donations across Democratic candidates, committees, and parties in 39 states, supporting lawmakers focused on ethical and transparent AI regulation.

  • +60

    Political Contributions and Lobbying Efforts

    April 11

    CEO Sam Altman's donation strategy in 2023—totaling nearly $300,000 and directed mainly to Democratic candidates and committees advocating for AI oversight—demonstrates a clear commitment to supporting reform-minded political forces. This approach aligns with progressive, anti-fascist efforts by backing lawmakers who prioritize regulation that protects public interests and counters authoritarian tendencies.

    Sam Altman Political Donations: Gave to Dems Shaping AI Bills

  • +60

    Executive Political Engagement

    April 11

    Sam Altman’s targeted political contributions, reflective of his role as OpenAI's CEO, underscore his executive engagement in shaping public policy. His contributions to politically active Democratic committees and lawmakers committed to AI oversight signal a deliberate effort to influence policies in a way that supports democratic values and resists authoritarian impulses.

    Sam Altman Political Donations: Gave to Dems Shaping AI Bills

  • OpenAI Workers Force CEO Reinstatement Through Collective Action
    NOV 23, 2023

    On November 23, 2023, more than 730 of OpenAI's estimated 770 employees signed an open letter demanding the reinstatement of CEO Sam Altman after his dismissal by the board. Faced with the prospect of a near-mass resignation and investor pressure, the board chose to rehire Altman, a decision that highlighted the power of worker collective action in challenging top-down decision-making.

  • +70

    Labor Relations and Human Rights Practices

    April 11

    The overwhelming collective action by OpenAI's workers—over 95% of the workforce threatening to resign unless CEO Sam Altman was reinstated—demonstrates a robust exercise of labor rights and solidarity. This event is a strong example of worker power challenging hierarchical decisions in an industry often criticized for devaluing human labor. The workers’ unified stance promotes fair labor practices and resists authoritarian management tactics, justifying a positive score in labor relations.

    OpenAI is a threat to labor, but its employees staged one of the most successful collective actions in tech

  • OpenAI Implements Safeguards Against AI Misuse
    OCT 14, 2023

    A report highlighted that OpenAI, among other tech giants, has imposed safeguards on its AI-based systems to mitigate disinformation, censorship, and surveillance, representing a proactive measure to counter authoritarian misuse, even though concerns remain over the potential for these safeguards to be easily breached.

  • +40

    Technology and Services Impact

    April 11

    OpenAI's implementation of safety protocols on its AI tools aims to curb the amplification of disinformation and unauthorized surveillance. Although the report notes that these measures can be bypassed, the act of instituting safeguards is a positive step toward reducing the misuse of technology in ways that could support authoritarian censorship and manipulation, aligning with anti-fascist and progressive principles.

    AI ‘supercharges’ online disinformation and censorship: report

  • OpenAI Exploited Ghost Workers
    OCT 01, 2023

    The article alleges that OpenAI, alongside other tech companies, employs a hidden workforce of AI ghost staff who suffer from low wages, lack of benefits, and poor working conditions. US lawmakers have initiated a probe into these practices, raising concerns over labor exploitation in the AI sector.

  • -80

    Labor Relations and Human Rights Practices

    April 11

    The article details exploitative labor practices involving AI ghost workers at OpenAI. Workers are reported to be underpaid, overworked, and denied basic benefits, a clear violation of labor rights that underscores a broader pattern of unethical business practices in the technology sector.

    Unmasking the 'Ghost' in AI: Exploitative Labor Practices Worry Tech Giants

  • OpenAI Lobbies EU on AI Act Amendments
    JUN 20, 2023

    OpenAI and its CEO, Sam Altman, have been accused of lobbying EU officials to water down provisions in the upcoming AI Act to avoid its general-purpose AI systems being classified as high-risk. Documents obtained through freedom of information requests reveal that OpenAI proposed amendments that were later incorporated into the final version of the law, aligning with similar moves by major industry players.

  • -60

    Political Contributions and Lobbying Efforts

    April 11

    OpenAI engaged in targeted lobbying efforts aimed at reducing regulatory burdens under the EU AI Act by arguing that its general-purpose AI systems should not be labeled high-risk. This undermines robust public oversight and transparency, favoring corporate interests over community safety and ethical governance.

    OpenAI allegedly lobbied EU to avoid that its AI-powered tools be considered "high-risk"

  • -40

    Executive Political Engagement

    April 11

    CEO Sam Altman's public appearances promoting global AI regulation contrast sharply with behind-the-scenes lobbying efforts to dilute the same regulatory framework. This inconsistency raises concerns about corporate transparency and genuine commitment to public safety.

    OpenAI allegedly lobbied EU to avoid that its AI-powered tools be considered "high-risk"

  • Exploitation of Kenyan Labor for ChatGPT Moderation
    NOV 01, 2021

    In November 2021, OpenAI contracted with outsourcing firm Sama to have Kenyan workers label and filter toxic content for ChatGPT. Workers were paid between $1.32 and $2 per hour and were exposed to graphic, traumatic material without adequate mental health support.

  • -75

    Labor Relations and Human Rights Practices

    April 11

    The article details how OpenAI's use of outsourced Kenyan labor under extremely low wages and poor working conditions resulted in significant mental distress and exploitation of vulnerable workers. These labor practices violate basic human rights and ethical labor standards, underscoring a negative impact on worker rights, a key anti-fascist and progressive concern.

    OpenAI and Sama hired underpaid Workers in Kenya to filter toxic content for ChatGPT

Industries

  • 541511 - Custom Computer Programming Services
  • 541512 - Computer Systems Design Services
  • 541715 - Research and Development in the Physical, Engineering, and Life Sciences (except Nanotechnology and Biotechnology)
  • 611420 - Computer Training
  • 511210 - Software Publishers