Campus Community,
Please attend the summer IT Cybersecurity Seminar on Thursday, June 21st. The seminar will take place from 9:00-11:00 in the SS&MS 2nd Floor Conference Room 2135. If you are unable to join us in person, a Zoom session will be available at https://zoom.us/j/728-549-7619
I will present highlights from the system-wide Security Summit held in April at UC Santa Cruz. Ashwin Mathew from the Berkeley Center for Long-term Cybersecurity will present work associated with his research, and Lukas Dresel, a Ph.D. student in UCSB’s SecLab, will give a talk related to his research. Finally, I’ll answer your questions and provide some information about another seminar later this year.
Matt Hall recently polled IT staff across campus, and nearly 50% of respondents expressed an interest in knowing more about cybersecurity. This seminar is an opportunity to do just that. You can find out more about Ashwin’s research in an article he wrote for Educause at https://er.educause.edu/blogs/
Finding bugs in software is an important task in a world where software permeates almost every aspect of our daily lives. This is especially true for software whose source code is not available, a common situation both because legacy code can no longer be compiled and because many authors simply do not release their source code.
Tools for automatically finding bugs and vulnerabilities in binaries have become much more capable in the last decade, with fuzzers and symbolic execution engines growing more widely available and more powerful. However, much of that improvement has been in analyzing software that communicates with other computers, such as file format parsers and network protocol implementations. It turns out that software designed to be used by humans, like games, command-line tools, and interpreters, differs in a number of ways that make these types of applications harder to analyze automatically. In this talk, I will go over how we built a tool that uses state-of-the-art binary analysis and bug-finding techniques, recognizes when it gets stuck, and then leverages untrained human workers to advance its analysis by having them interact with the software under test while their interactions are recorded. Using Amazon's Mechanical Turk, our human-assisted prototype increased the number of programs in which bugs were found from 36 to 56 out of 95 programs designed to be used by humans.
Best,
Sam Horowitz,
Chief Information Security Officer
UC Santa Barbara