Ask HN: Privacy concerns when using AI assistants for coding?

6 points by Kholin 21 hours ago

I've recently seen teams say they use third-party AI assistants like Claude or ChatGPT for coding. Don't they consider it a problem to feed their proprietary commercial code to these third-party services?

If you feed the most critical parts of your project to an AI, wouldn't that create a real security exposure? The AI would then have an in-depth understanding of your project's core architecture. And if that material ever made it into training data, couldn't other users surface those underlying details and use them to probe your defenses?

Furthermore, couldn't other users then copy your code without attribution, effectively treating your proprietary work as if it were open source?

ATechGuy 2 hours ago

I believe enterprises that care about privacy are using private AI offerings from big tech (say, GitHub Copilot); others may not care so much about it.

apothegm 21 hours ago

These companies all claim they don't use data from API calls for training. Whether or not they adhere to that in practice is… TBD, I guess.

So far I've decided to trust Anthropic and OpenAI with my code, but not DeepSeek, for instance.

baobun 21 hours ago

Especially under the current US administration and geopolitical climate?

Yeah, we're not doing that.

Also moved our private git repos and CI to self-managed infrastructure.

bhaney 20 hours ago

> The AI would then have an in-depth understanding of your project's core architecture

God how I wish this were true

rvz 21 hours ago

Don't forget that the API keys sitting in your env files can get read and sent to Cursor, Anthropic, OpenAI, and Gemini as well.
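
For what it's worth, some of these tools support gitignore-style exclusion files. Cursor, for example, documents a .cursorignore, so a minimal mitigation (assuming the tool actually honors it) looks something like:

  # .cursorignore (gitignore syntax): keep these files out of the AI's context
  .env
  .env.*
  *.pem
  *.key
  secrets/

Even then, treat it as one layer: the safer habit is keeping real credentials out of the working tree entirely and injecting them at runtime.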