• 10 Posts
  • 179 Comments
Joined 1 year ago
Cake day: June 12th, 2023



  • FantasticFox@lemmy.world to Technology@lemmy.world · Thoughts? · 11 months ago

    The biggest risks I see with AI are that it makes misinformation, scams, etc. a lot easier.

    I remember as a kid you knew that people could just make stuff up, but a photograph was fairly reliable. Then along came Photoshop and it was trivial to make convincing fake photographs.

    AI can now do this with audio, and soon with full video (perhaps already?) - at which point it becomes much harder to trust anything.








  • I find it slightly sad that when our leaders talk of Technology and Innovation - they often mean these ‘tech’ companies that essentially work out how to better sell advertising and occasionally provide a useful service alongside this.

    Where is today's Bell Labs? The Skunk Works?

    We face enormous problems such as climate change, and decarbonisation looks like it will be a very difficult challenge. And yet we focus on banal “innovation” in frivolous things.






  • Yeah, but some things cost a lot of money to develop. The higher the cost of the R&D, the less likely it is to occur without some patent system. Although I agree that in programming specifically the Open Source model seems to work quite well - look at the Apache Foundation.

    You could have a model where all research is done by a public body or something like the Apache Foundation, but this reduces innovation, because it leaves less room for people to try things that aren't considered likely to succeed - publicly funded research tends to follow the safest path. As an example, look at how public nuclear fusion research is continuing with the traditional toroidal tokamak model at ITER, compared with the more experimental designs being tested by private companies such as Helion, Focus Fusion, and Tokamak Energy (which uses a low-aspect-ratio 'spherical tokamak').



    Yeah, I think with software, given the low barrier to entry etc., it makes sense for it to sit further towards the fewer-protections end of the spectrum.

    But still, if you'd paid a load of PhDs to come up with some really clever algorithm (think of how Shazam had its music recognition algorithm long, long before modern ML) and then someone could just steal it - well, that would harm innovation, and ultimately the tech industry and investment would go elsewhere, and those clever PhD grads just wouldn't find employment.

    It’s a balance that depends on the properties of each industry, but I don’t think that no protections whatsoever is ever a good answer.




  • But it’s trivial to write a slightly different implementation of something.

    I think one really has to consider what the effect on innovation will be. You don't want too many protections, as that stifles innovation by preventing people from building on prior ideas; but equally you don't want no protection at all, as that also discourages innovation: R&D takes money, so if you can't recoup the investment, the money simply won't be invested in R&D and the innovation won't happen.