Citiverse
  • filippo@abyssdomain.expert
    I wish that those surveys so often cited by InfoSec pundits that ask

    Do you fully trust AI output?
    Do you always verify AI output?

    also asked

    Do you fully trust your colleagues' output?
    Do you always verify your colleagues' output?

    Just to have comparative numbers, you know.

  • filippo@abyssdomain.expert

    One could go on!

    Do you fully trust third-party dependencies?
    Do you always verify third-party dependencies?

    But somehow AI output is special and a harbinger of all security issues.

  • aura@gts.foxsnuggl.es

    @filippo the failure mode of LLM outputs is very different and harder to reason about, but ironically more often wrong in such an obvious way that it's easy to anthropomorphize the LLM and make false assumptions about future failure modes.

    reviewing LLM output definitely requires a different type of vigilance.

  • sirikon@mastodon.social

    @filippo A colleague is responsible for the output even when I'm the reviewer, AI is not.

    A colleague is expected to learn from their mistakes and grow in responsibilities; AI only improves if the big tech firm decides to retrain it.

    Colleagues are very different from each other, and each one has their own flaws and strengths when you try to trick them into doing something. There are like 5 AIs doing 90% of the work, and they can be tricked by asking them to write a haiku.

  • tymwol@hachyderm.io

    @filippo Agree in terms of numbers, but also I don't think it is the same. People have incentives not to write bad code (you don't want to look dumb, don't want to lose your job, have some internal motivation for doing a good job, etc.), while AI has no such incentives. And no, prompting it for them is not the same thing. Moreover, people reason, while AI does not, so unless someone just copies and pastes code from StackOverflow, they will put at least a minimal amount of thought into their work, while AI can produce code that does not even compile or is blatantly wrong in other ways.

  • josephlord@union.place

    @filippo

    No. I don’t trust third-party dependencies in general.

    Define verify? The extent of verification depends on trust in the third party and on the project I am making dependent (risk profile, expected life, etc.).

    Red flags would include things like too many onward dependencies, dependencies that I consider to be privacy risks, etc.

    On the other hand I have high trust in things like official Swift project packages, SQLite and substantial trust in things like the Vapor project.

  • dalias@hachyderm.io

    @filippo That's missing the point. Your colleagues understand there are consequences to fucking up, avoid doing it, and work to make things right if they do. The slop extruder just digs in and feeds you more slop.

  • cybersecurity@poliverso.org shared this discussion
