CV
For more details, download the PDF file.
Table of contents
- General Information
- Education
- Skills
- Research Experience
- Honors and Awards
- Research Interests
- Languages
General Information
- Full Name: Alireza Dehghanpour Farashah
- Date of Birth: 6th January 2001
- Languages: English, Persian
Education
- 2019: Bachelor's in Computer Engineering, Sharif University of Technology, Tehran, Iran
  - GPA: 19.48/20 (Top 10%)
  - Average grade over the last two years: 19.66/20
- 2016: High School Diploma in Mathematics and Physics, Allameh Helli, National Organization for Development of Exceptional Talents, Tehran, Iran
  - GPA: 4/4
Skills
- Programming Languages: Python, C++, Java
- Frameworks & Libraries: PyTorch, TensorFlow, Keras, Scikit-learn, NumPy, Pandas, OpenMIM, MMDetection, MMPose
- Tools: Git, PostgreSQL, MySQL
- Operating Systems: Linux, Windows
- CAD & Simulation Tools: Proteus, ModelSim, Packet Tracer, Arena, Wireshark, Arduino
Research Experience
- Jul 2023 - Sep 2023: Summer Research Intern, Chinese University of Hong Kong (CUHK), supervised by Prof. Farzan Farnia
  - My internship under Prof. Farnia's supervision deepened my understanding of adversarial attacks and perturbations and exposed me to cutting-edge research and techniques. I gained practical skills in developing low-rank adversarial attacks: we designed a new attack that projects perturbations into a low-dimensional space, which improves the transferability of the resulting adversarial perturbations. A minimal sketch of this idea is shown below.
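The following PyTorch snippet is only a minimal, illustrative sketch of the low-rank idea described above, not the exact attack developed during the internship: a PGD-style loop that re-projects the perturbation onto a low-rank subspace (via a truncated SVD of each channel slice) at every step. The model, budget, step size, and rank below are placeholder values.

```python
import torch

def low_rank_pgd(model, x, y, eps=8/255, alpha=2/255, steps=10, rank=4):
    """Illustrative PGD-style attack whose perturbation is re-projected onto a rank-`rank` subspace each step."""
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        grad = torch.autograd.grad(loss_fn(model(x + delta), y), delta)[0]
        with torch.no_grad():
            stepped = delta + alpha * grad.sign()               # usual signed-gradient ascent step
            b, c, h, w = stepped.shape
            flat = stepped.reshape(b * c, h, w)
            u, s, v = torch.svd_lowrank(flat, q=rank)           # truncated SVD of each channel slice
            flat = u @ torch.diag_embed(s) @ v.transpose(-2, -1)
            delta = flat.reshape(b, c, h, w).clamp_(-eps, eps)  # clamping keeps the L_inf budget (rank then only approximate)
        delta.requires_grad_(True)
    return (x + delta).clamp(0, 1).detach()                     # assumes pixel values in [0, 1]
```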
- Oct 2022 - Present: Research Assistant, Robust and Interpretable Machine Learning Lab, Sharif University of Technology, supervised by Prof. Mohammad Hossein Rohban
  - In this project we worked on fast adversarial training, which suffers from Catastrophic Overfitting (CO). CO occurs when the adversarially trained network suddenly loses robustness against multi-step attacks such as Projected Gradient Descent (PGD). Although several approaches have been proposed to address this problem in Convolutional Neural Networks (CNNs), we found that they do not perform well when applied to Vision Transformers (ViTs). In our paper we propose a novel fast training method that overcomes catastrophic overfitting in ViTs; the basic setup is sketched after this entry.
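The snippet below is an illustrative sketch of the general setup, not the method proposed in the paper: single-step adversarial training (one FGSM-style attack step per batch) plus a multi-step PGD evaluation. A sudden collapse of PGD accuracy while single-step robustness stays high is the catastrophic-overfitting signature mentioned above. The model, data loaders, and hyperparameters are placeholders.

```python
import torch

loss_fn = torch.nn.CrossEntropyLoss()

def fgsm_delta(model, x, y, eps):
    """Single-step perturbation used for cheap training-time adversarial examples."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)  # random start
    grad = torch.autograd.grad(loss_fn(model(x + delta), y), delta)[0]
    return (delta + eps * grad.sign()).clamp(-eps, eps).detach()

def pgd_delta(model, x, y, eps, alpha=2/255, steps=10):
    """Multi-step perturbation used only to evaluate robustness."""
    delta = torch.zeros_like(x)
    for _ in range(steps):
        delta.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x + delta), y), delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    return delta

def train_epoch(model, loader, opt, eps=8/255):
    """Fast adversarial training: one attack step per batch."""
    model.train()
    for x, y in loader:
        delta = fgsm_delta(model, x, y, eps)
        opt.zero_grad()
        loss_fn(model(x + delta), y).backward()
        opt.step()

def pgd_accuracy(model, loader, eps=8/255):
    """Robust accuracy under PGD; a sudden drop during training signals catastrophic overfitting."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        delta = pgd_delta(model, x, y, eps)
        correct += (model(x + delta).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total
```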
- Oct 2020 - Nov 2021: Research Assistant, Data Analytics Lab, Sharif University of Technology, supervised by Prof. Seyed Abolfazl Motahari
  - During my research under Prof. Motahari's supervision, I became familiar with various topics and techniques in natural language processing and speech recognition, such as language-model pre-training, knowledge distillation, contrastive representation learning, and self-supervised learning. In our project we implemented a speech-to-text model based on Wav2Vec 2.0, a framework for self-supervised learning of speech representations. Wav2Vec 2.0 consists of two stages: in the pre-training stage the model learns to encode the raw speech waveform into latent representations using a contrastive loss, and in the fine-tuning stage it learns to map those representations to text using a connectionist temporal classification (CTC) loss. A sketch of the fine-tuning objective is shown below.
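As an illustration of the fine-tuning-stage objective described above (not the lab's actual training pipeline), the sketch below loads a public Wav2Vec 2.0 CTC checkpoint through the Hugging Face transformers library, computes the CTC loss for one dummy utterance/transcript pair, and greedily decodes the logits. The checkpoint name and the dummy data are stand-ins.

```python
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# any Wav2Vec 2.0 checkpoint with a CTC head and character vocabulary works here
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# one fake one-second utterance at 16 kHz and its transcript, just to show the shapes
speech = torch.randn(16000).numpy()
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
labels = processor.tokenizer("HELLO WORLD", return_tensors="pt").input_ids

outputs = model(input_values=inputs.input_values, labels=labels)
outputs.loss.backward()                  # CTC loss between frame-level logits and the character sequence

# greedy decoding: best token per frame, then collapse repeats and blanks
pred_ids = outputs.logits.argmax(dim=-1)
print(processor.batch_decode(pred_ids))
```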
Honors and Awards
- 2019: Ranked 33rd in the Nationwide University Entrance Exam among more than 160,000 participants
Research Interests
- Robustness
- Adversarial Attacks
- Interpretability
- Differential Privacy
- Fairness
- Domain Adaptation/Generalization
- Biomedical Image Analysis
- Object Detection
- Pose Estimation
- Pose Tracking
Languages
- English: Proficient, TOEFL iBT 105 (R26, L30, S24, W25)
- Persian: Native