To what extent are you, as an editor, worried or excited (or both) about how ChatGPT (or similar programs) could affect the dissemination of our scientific results?
"Write this report in the style of [your academic rival]"
Supercharged fraud...
LLMs are guilty of gargantuan-scale copyright infringement...
...but AI-generated stuff can't be copyrighted
Théâtre d’Opéra Spatial, Jason Allen (using Midjourney)
Lack of clarity/consistency of editorial policies
Should ChatGPT be a coauthor? (plagiarism?)
Difficulty of detecting AI-generated content with traditional plagiarism tools (see the toy illustration after this list)
Lack of honesty (ouch!) in acknowledgements of AI use
Laziness empowered: "just let the machine do it"
Safeguard (?!): "Humans should always do a check"
Have no illusions: this won't happen; humans like to cut corners and systems tend to reward bullshit
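A toy illustration of the detection difficulty mentioned above (the two "submissions" are entirely invented, and shared word 5-grams stand in for the verbatim-overlap checks that classic plagiarism tools rely on): the sentences make the same scientific claim, yet share no five-word phrase, so an overlap-based check raises no flag.

```python
# Toy illustration (invented sentences): why verbatim-overlap plagiarism checks
# can miss AI-paraphrased text. Both statements make the same claim, but share
# no literal five-word phrase, so an n-gram overlap check sees nothing.
import re

original = ("We find that the cluster's velocity dispersion implies a dynamical "
            "mass of 10^15 solar masses, in tension with the lensing estimate.")
paraphrase = ("The measured spread of galaxy velocities points to a total mass of "
              "about a quadrillion suns, which disagrees with what gravitational "
              "lensing suggests.")

def word_ngrams(text, n=5):
    """Set of consecutive n-word tuples, lowercased, punctuation stripped."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

shared = word_ngrams(original) & word_ngrams(paraphrase)
print(f"shared 5-word phrases: {len(shared)}")  # -> 0: the rehashed claim goes undetected
```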
Long-term fear: truth drowning, with bogus research far more voluminous than correct research
Huge opportunities for much-improved, flexible and highly-optimizable apps
Super simple example: IArxiv
Challenge for publishers: introduce next-generation digesters and search (toy sketch below)
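A minimal sketch of the digester/search idea, assuming invented abstracts and arXiv IDs and plain TF-IDF cosine similarity (this is not how IArxiv is actually implemented): rank today's listings against a profile built from abstracts the reader previously liked.

```python
# Minimal recommender sketch (toy data, not IArxiv's actual algorithm):
# score new arXiv abstracts against a profile built from abstracts the
# reader has previously liked, using TF-IDF vectors and cosine similarity.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

liked = [  # hypothetical abstracts the reader bookmarked
    "We measure the Hubble constant with strong-lensing time delays of quasars.",
    "A new likelihood for cosmic microwave background polarization data is presented.",
]
todays_listings = {  # hypothetical new submissions: arXiv id -> abstract
    "2401.00001": "Constraints on dark energy from baryon acoustic oscillations.",
    "2401.00002": "A graph neural network for predicting protein folding pathways.",
    "2401.00003": "Time-delay cosmography with lensed quasars and the Hubble tension.",
}

vectorizer = TfidfVectorizer(stop_words="english")
# Fit one shared vocabulary over liked papers and new submissions.
tfidf = vectorizer.fit_transform(liked + list(todays_listings.values()))
profile = np.asarray(tfidf[: len(liked)].mean(axis=0)).reshape(1, -1)  # mean of liked vectors
scores = cosine_similarity(profile, tfidf[len(liked):]).ravel()

for arxiv_id, score in sorted(zip(todays_listings, scores), key=lambda pair: -pair[1]):
    print(f"{arxiv_id}  relevance = {score:.2f}")
```

A real digester would of course go further (semantic embeddings, full text, citation graphs), but the ranking skeleton stays the same.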
Machines becoming able to assess research work
Definition of "original":
Go: the machine is capable of "original" thought
Often read/heard:
AI can only rehash existing stuff, not invent anything new
I fundamentally disagree
Nature, i.e. inanimate matter, is creative: it invented us
AI is clearly a step above inanimate matter
All the current commotion is not really about Machine Learning
In the end it's all about human learning,
what future it has,
and whether we give up, adapt or (need to) fight