威廉希尔williamhill (China) Official Website

Academic Seminars

    Adversarial Machine Learning


    Posted: 2019-06-18    Source: 威廉希尔

    Title

    Adversarial Machine Learning

    Time

    2019-06-19 16:00:00

    Venue

    Lecture Hall III-237, Main Building, North Campus, Xidian University

    Speaker

    Fabio Roli

    About the Speaker

    Fabio Roli is a Full Professor of Computer Engineering at the University of Cagliari, Italy, and Director of the Pattern Recognition and Applications laboratory (http://pralab.diee.unica.it/). He is a partner and the R&D manager of Pluribus One, a company he co-founded (https://www.pluribus-one.it). He has been doing research on the design of pattern recognition and machine learning systems for thirty years. His h-index is 60 according to Google Scholar (June 2019). He is a Fellow of the IEEE and a Fellow of the International Association for Pattern Recognition. He was a member of the NATO advisory panel for Information and Communications Security, NATO Science for Peace and Security (2008–2011).

    Abstract

    Machine-learning algorithms are widely used in cybersecurity applications, including spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm faces intelligent, adaptive attackers who can carefully manipulate data to subvert the learning process. Because machine-learning algorithms were not originally designed under such premises, they have been shown to be vulnerable to well-crafted, sophisticated attacks, including test-time evasion (the attacks behind so-called adversarial examples) and training-time poisoning. This talk introduces the fundamentals of adversarial machine learning through a structured overview of techniques for assessing the vulnerability of machine-learning algorithms to adversarial attacks, both at training and at test time, and of some of the most effective countermeasures proposed to date. Application examples include object recognition in images, biometric identity recognition, and spam and malware detection.
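As a concrete illustration of the test-time evasion attacks the abstract mentions, the sketch below trains a toy linear "detector" (logistic regression on synthetic two-class data) and then flips its decision on a correctly detected malicious sample using a fast-gradient-sign perturbation. This is a minimal sketch of the general attack class, not material from the talk; the data, model, and perturbation budget are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: a "benign" cluster (class 0) and a "malicious" cluster (class 1).
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.hstack([np.zeros(50), np.ones(50)])

# Train a logistic-regression detector with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    g = p - y                            # gradient of the logistic loss w.r.t. scores
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

def predict(x):
    """Return 1 if x is flagged as malicious, else 0."""
    return int((x @ w + b) > 0)

# Test-time evasion: take a malicious point the detector catches and move it
# against the sign of the weight vector, which maximally lowers the decision
# score w.x + b under an L-infinity budget (fast gradient sign method).
x = X[-1]
eps = 1.5                       # perturbation budget, chosen for this toy geometry
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the perturbed sample evades the detector
```

For a linear model the attack has this closed form; against deep networks the same idea is applied to the gradient of the loss with respect to the input, which is what produces adversarial examples.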

