My key takeaways
- the fragmentation of the media landscape made deepfakes relevant
- deepfakes still need a lot of training material
- no data, no deep fake
- voice is still an issue (delay, monotony), but AI technology is developing fast
- attackers may simply find someone with a similar voice rather than generating the voice artificially
- the BSI has been keeping an eye on the development as well, for about half a year now
- "I swear I’m not a cat" 😀
- most awareness trainings focus on the attackers' tools and miss the strategy behind them
- the effect of a deepfake depends on the context, i.e. the strategy
- psychological associations help attackers avoid having to fake every detail
- WYSIATI – "what you see is all there is"
- if we concentrate on details and stay skeptical, we still have a ~94% probability of detecting deepfakes
- multimedia forensics has existed for more than 20 years, e.g. statistical differences in images (see the first sketch after this list)
- FB challenge 2020: with active image or video manipulation, e.g. in the context of an attack, the automated detection rate drops to ~0%
- fake porn is a different technology than real-time face reenactment
- every silver bullet, like different lighting effects in the two eyes of a person, will be countered by a new technology: a cat-and-mouse game
- manual detection is not a scalable option
- alternatives: detect the original video source, or a PKI for media resources (see the second sketch after this list)
- driver of technological development: media companies for entertainment production
- the dark side will adopt these developments
- top details to spot deep fakes today
- movements
- voice
- teeth
- length variations of the neck
- eyes (reflections, colors)
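
To make the multimedia-forensics point concrete, here is a minimal sketch of one classic idea: spliced or regenerated regions often carry different local noise statistics than the rest of a frame. The block size, the high-pass filter, the 3-sigma threshold, and the file name are illustrative assumptions, not methods from the talk:

```python
# A sketch of one classic forensic signal: manipulated regions often
# have different local noise statistics than the rest of the image.
# We high-pass filter the frame and flag blocks whose residual
# variance is an outlier. Block size and the 3-sigma threshold are
# illustrative assumptions.
import numpy as np
from PIL import Image

def noise_outlier_blocks(path, block=32):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    # High-pass residual: subtract the mean of the 4-neighborhood.
    local_mean = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
                  np.roll(gray, 1, 1) + np.roll(gray, -1, 1)) / 4.0
    residual = gray - local_mean
    h, w = residual.shape
    variances = np.array([
        residual[y:y + block, x:x + block].var()
        for y in range(0, h - block + 1, block)
        for x in range(0, w - block + 1, block)
    ])
    # Blocks far from the median noise level are candidates for
    # pasted or regenerated content.
    flagged = np.abs(variances - np.median(variances)) > 3 * variances.std()
    return variances, flagged

if __name__ == "__main__":
    variances, flagged = noise_outlier_blocks("frame.png")  # hypothetical file
    print(f"{flagged.sum()} of {len(variances)} blocks look anomalous")
```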
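And a minimal sketch of the "PKI for media resources" alternative: the publisher signs a hash of the original file, and anyone holding the public key can verify a clip is unmodified. The key type, file name, and detached-signature flow are assumptions for illustration; real provenance schemes such as C2PA embed signed metadata in the media itself:

```python
# A sketch of signed media provenance: the producer signs a digest of
# the published file; consumers verify it against the producer's
# public key. Ed25519 and the file name are illustrative choices.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

def file_digest(path: str) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.digest()

# Producer side: sign the digest of the published video.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(file_digest("clip.mp4"))  # hypothetical file

# Consumer side: verify() raises InvalidSignature if the file was
# altered after signing, e.g. by a deepfake manipulation.
public_key.verify(signature, file_digest("clip.mp4"))
print("signature valid: clip.mp4 is unmodified since signing")
```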