As it becomes increasingly difficult to tell the difference between human- and machine-generated content, we must place safeguards and tests around AI to ensure it is trustworthy and responsible.
Room F1+F2+F3 - Plenary
General Audience, applicable to all roles.
It’s been over 70 years since Alan Turing defined what many still consider to be the ultimate test for a computer system: can a machine exhibit intelligent behavior that is indistinguishable from that of a human? Originally called the imitation game, the Turing test involves having an evaluator judge text conversations between a human and a machine designed to respond like a human. The machine passes the test if the evaluator cannot reliably tell the human-generated text from the machine-generated text. Although the Turing test generally serves as a starting point for discussing advances in AI, some question its validity as a test of intelligence.
After all, the results do not require the machine to be correct, only that its answers resemble those of a human. Whether it's due to artificial "intelligence" or imitation, we live in an age where machines are capable of generating convincingly realistic content. Generative AI does more than answer questions: it writes articles and poetry, synthesizes human faces and voices, creates music and artwork, and even develops and tests software. But what are the implications of these machine-based imitation games? Are they a glimpse into a future where AI reaches general or superintelligence? Or is it simply a matter of revisiting or redefining the Turing test?
Join Tariq King as he leverages a live audience of software testing professionals to probe everything from generative adversarial networks (GANs) to generative pre-trained transformers (GPTs). Let’s critically examine the Turing test and more, because it’s judgment day, and this time, we are the judges!