Congress restricts staff usage of ChatGPT, citing privacy concerns

The restrictions center on how the chatbot may be used and on privacy concerns.

The House is set to place limits on how congressional staff may use ChatGPT, and has already distributed a notice to staff offices outlining the new rules.

The notice told all congressional staff that offices are "only authorized to use the ChatGPT Plus version of the product." The Plus version is a $20-per-month subscription.

The Committee on House Administration set conditions for the chatbot's use in congressional offices.

The notice states that "[u]se of the product is for research and evaluation only," and that offices cannot "incorporate it into regular workflow."

Regarding privacy, the notice says ChatGPT is "only to be used with non-sensitive data." As an example, it warns against pasting blocks of text into the chatbot "that have not already been made public."

Staffers are also required to enable the chatbot's privacy settings. The notice warns that these are "disabled by default" and must be turned on in ChatGPT's settings menu.

The committee also recommended that staffers "read guidance from the House Digital Service Advisory Group," and added that "[n]o other version of ChatGPT or other large language models AI software are authorized."

Concerns over AI software have been cropping up on both sides of the aisle. 

In a May 16 Senate subcommittee hearing, Samuel Altman, CEO of OpenAI, the company behind ChatGPT, met with lawmakers about the regulation of artificial intelligence and the power the technology holds.

In his written testimony, Altman stated that "OpenAI believes that regulation of AI is essential, and we’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits."

Disinformation was another concern raised in the written testimony. As Altman wrote, "Fighting disinformation takes a whole-of-society approach, and OpenAI has engaged with researchers and industry peers early on to understand how AI might be used to spread disinformation."

The Brookings Institution reported on May 8 that the language model exhibits a "pro-environmental, left-libertarian orientation" bias.