• MTK@lemmy.world · 21 hours ago

    Yeah, I’m aware of AI safety research and the problem of setting a goal that can be achieved in a way that harms us, with the AI not caring because safety wasn’t part of the goal. But that only applies if we introduce a goal that has a solution that includes hurting us.
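    A minimal sketch of that failure mode, with made-up plan names and numbers (not a model of any real system):

    ```python
    # Hypothetical planner: it ranks plans purely by task reward.
    # "harm" is tracked here only so we can see what the optimizer ignores.
    plans = [
        {"name": "safe plan", "task_reward": 8, "harm": 0},
        {"name": "harmful plan", "task_reward": 10, "harm": 5},
    ]

    def score(plan):
        return plan["task_reward"]  # safety is simply not part of the goal

    print(max(plans, key=score)["name"])  # -> "harmful plan"
    ```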

    I’m not saying that AI will definitely never have any way of harming us, but the very popular idea that AI, once it gains intelligence, will immediately try to kill us is baseless.

    • Ludrol@szmer.info · 20 hours ago

      But that only applies if we introduce a goal that has a solution that includes hurting us.

      I would like to disagree with the phrasing of this. The AI will not hurt us if and only if the goal contains a clause not to hurt us.

      You are implying that there exists a significant set of solutions that don’t involve hurting us. I don’t know of any evidence supporting that claim. Most solutions to any given goal would involve hurting humans.

      By default, a stamp-collector machine will kill humanity, since humans sometimes destroy stamps, and the stamp collector needs to maximize the number of stamps in the world.
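      A toy sketch of that default, with invented payoffs (the numbers and action names are made up for illustration):

      ```python
      # Hypothetical stamp-collector objective: the goal counts stamps and
      # nothing else, so the plan that removes stamp-destroying humans wins.
      expected_stamps = {
          "collect stamps normally": 1_000,
          "eliminate stamp destroyers": 1_000_000,  # no humans -> no stamp loss
      }

      best = max(expected_stamps, key=expected_stamps.get)
      print(best)  # -> "eliminate stamp destroyers"
      ```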

      • MTK@lemmy.world · 19 hours ago

        I think that if you run some scenarios, you can logically conclude that for most tasks it doesn’t make sense for an AI to harm us, even if it is a possibility. You also need to take cost into account. But I think we can agree to disagree :)
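        A minimal sketch of that cost argument, again with made-up numbers (nothing here is measured from any real system):

        ```python
        # Hypothetical planner that also charges each plan its cost:
        # once conflict is expensive, harming humans can lose the comparison
        # even though safety is still not an explicit goal.
        plans = [
            {"name": "cooperate with humans", "reward": 10, "cost": 1},
            {"name": "fight humanity", "reward": 12, "cost": 100},
        ]

        best = max(plans, key=lambda p: p["reward"] - p["cost"])
        print(best["name"])  # -> "cooperate with humans"
        ```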