
About Me

Emile Timothy Anand

About Me, July 5, 2022
"Looking outside Annenberg" by Kim Fox Photography

Hello! My name is Emile Timothy, and I'm a senior at Caltech majoring in Computer Science. I do research at the Theory of Computation Group and the Rigorous Systems Research Group (RSRG), and I am a member of the Caltech Data Science team. My interests include working on research and coding projects, playing the piano, thinking about wild hypothetical scenarios, and exploring boba stores and running trails around Pasadena and Los Angeles as a member of the Caltech XC/TF team. I occasionally post pictures and share my thoughts on Instagram and Twitter.

My research with Chris Umans focuses on constructing a framework to study the model of expander random walks, a useful tool in derandomization. The goal is to strengthen the pseudorandomness guarantees of expander random walks by identifying test functions that cannot distinguish them from truly random walks; we are currently trying to show that all functions computed by \(\mathrm{ACC}^0\) circuits are fooled by expander random walks. My research with Adam Wierman focuses on using perturbation analysis to prove stability bounds for model-predictive control in noisy systems, and on deriving linear feedback policies for the regularized online balanced descent (R-OBD) algorithm; this work is currently under review at SIGMETRICS 2023. My research with Yaser Abu-Mostafa focuses on explaining why neural-network multi-classifiers outperform binary classifiers. To that end, we are developing a theory of mutual cross-regularization between classes to prove a stronger learning bound for multi-classifiers, and training neural networks on benchmark datasets such as MNIST to probe the scope of this theory.
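To make the "fooling" notion concrete, here is a minimal Monte Carlo sketch (my illustration for this page, not code from the research; it assumes the third-party networkx package and toy parameters throughout). It labels the vertices of a random regular graph, which is almost surely a good expander, and checks that a simple test function, the parity of the labels seen along a walk, is distributed nearly the same way on expander random walks as on truly random bits:

```python
import random

import networkx as nx  # third-party; assumed available for this sketch

n, d, steps, trials = 512, 8, 12, 20_000  # toy graph size, degree, walk length

# A random d-regular graph is almost surely a good expander.
G = nx.random_regular_graph(d, n, seed=0)
adj = {v: list(G[v]) for v in G}
label = {v: random.choice([0, 1]) for v in G}  # random 0/1 vertex labels

def walk_parity() -> int:
    """Parity of the labels seen along one expander random walk."""
    v = random.randrange(n)        # uniformly random starting vertex
    total = 0
    for _ in range(steps):
        v = random.choice(adj[v])  # one uniform step along an edge
        total += label[v]
    return total % 2

def iid_parity() -> int:
    """Parity of truly independent uniform bits (the benchmark)."""
    return sum(random.getrandbits(1) for _ in range(steps)) % 2

p_walk = sum(walk_parity() for _ in range(trials)) / trials
p_iid = sum(iid_parity() for _ in range(trials)) / trials
print(f"Pr[parity = 1]: expander walk {p_walk:.3f} vs. iid bits {p_iid:.3f}")
```

Parity is a test function known to be fooled by expander random walks; the question the research pursues is how much richer a class of test functions, such as \(\mathrm{ACC}^0\) circuits, shares this property.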


Current Research Endeavors

Learn About My Research, July 5, 2022
"Coding on Moore Walk" by Kim Fox Photography

In high school, I represented the UAE in the International Physics Olympiad, where I specialized in the thermal and statistical physics section. It was a defining moment for me: the study of stochasticity and noise in the context of physics seemed like something I would enjoy working on.

However, at Caltech, it was the study of randomness in computer science that allured me. There are several open problems in computational complexity, the field that studies classes of problems and how they relate to each other in terms of computational resources, one of which is whether \(\mathrm{BPP} = \mathrm{P}\). \(\mathrm{BPP}\) ("bounded-error probabilistic polynomial time") is the class of problems solvable by a randomized polynomial-time algorithm whose error probability can be driven arbitrarily low by repetition. By contrast, \(\mathrm{P}\) ("polynomial time") is the class of problems solvable by a deterministic polynomial-time algorithm. As of today, whether \(\mathrm{BPP} = \mathrm{P}\) remains open, but the implications of this equality would be enormous and far-reaching. Most notably, it would imply that randomness can be simulated deterministically.

True randomness is an expensive resource: we can extract truly random bits only by measuring physical fluctuations such as background radiation and air temperature, and even samples from these sources can be (catastrophically) correlated. Therefore, in cryptography and computational physics, computers today instead use pseudorandom bits, which are expanded from a small seed of truly random bits but can be distinguished from true randomness by sufficiently large statistical tests and circuits. It would be revolutionary if algorithms could do away with the seed entirely, since it would imply 1) that \(\mathrm{BPP} = \mathrm{P}\), and 2) that randomness is not a special resource, since we could create it in polynomial time.

My research on expander graphs stems from the Ajtai-Komlós-Szemerédi (1987) theorem that a short random walk on an expander graph can efficiently simulate uniformly random sampling of its vertices, with error probability decaying exponentially in the length of the walk. It reduces the number of random bits needed to sample \(k\) vertices of an \(n\)-vertex graph from \(k \log n\) (for independent samples) to \(\log n + O(k)\) (for a single walk): an absurdly useful theorem for the derandomization of probabilistic algorithms. The problem is that we don't really have a consistent way to analyze expander graphs, given their ubiquity (error-correcting codes, probabilistically checkable proofs, derandomization) and their complexity in the asymptotic limit (as the number of vertices grows). Therefore, I'm studying the expander random walk to see what it tells us about random walks on expander graphs as the bias goes to \(0\). I'm hopeful that it might reveal something special about the underlying nature of randomness.
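For a sense of the savings, here is a back-of-the-envelope sketch (toy parameters of my choosing, not numbers from the theorem): sampling \(k\) vertices independently costs \(\log_2 n\) random bits per sample, while a single length-\(k\) walk on a degree-\(d\) expander pays \(\log_2 n\) bits once for the starting vertex and then only \(\log_2 d\) bits per step.

```python
import math

n, d, k = 2**20, 16, 50  # toy choices: vertices, expander degree, #samples

independent_bits = k * math.log2(n)          # k independent vertex samples
walk_bits = math.log2(n) + k * math.log2(d)  # log(n) + O(k) for one walk

print(f"independent sampling: {independent_bits:.0f} random bits")  # 1000
print(f"expander random walk: {walk_bits:.0f} random bits")         # 220
```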

Randomness, in the form of stochasticity, also rears its head quite unpredictably in machine learning. In the study of dimensionality-reduction embeddings, we can transform any \(p\)-dimensional dataset into a \(q\)-dimensional one (with \(q < p\)) to actually visualize the data and understand how clusters form by comparing the similarity of points. The most common techniques for this are \(t\)-\(\mathrm{SNE}\) (t-distributed stochastic neighbor embedding) and \(\mathrm{UMAP}\) (uniform manifold approximation and projection), which use the Kullback-Leibler divergence and the cross-entropy loss functions, respectively. When performing dimensionality reduction on noisy real-world data, my research group observed that the noise is not uniformly propagated down to the lower dimensions, suggesting that noise (even truly random noise) contains some information, which has interesting implications in information theory. My affinity for understanding noise has also made me curious about how noise affects machine-learning models. My work in reinforcement learning seeks to understand how model-predictive control with prediction errors can learn in the presence of noise, and whether it is possible to guarantee a stable trajectory during the learning process. I'm hopeful that my work on understanding how noise shapes the trajectories of dynamical systems can lead to more stable processes and systems, and will contribute meaningful change to the efforts to create a sustainable world.
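As a concrete illustration, here is a minimal sketch (an assumed setup with synthetic data, not my group's actual pipeline) of embedding noisy high-dimensional clusters with scikit-learn's t-SNE; UMAP, from the separate umap-learn package, would slot into the same place:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE

# Synthetic 50-dimensional clusters with additive Gaussian noise.
X, y = make_blobs(n_samples=600, n_features=50, centers=4, random_state=0)
X_noisy = X + np.random.default_rng(0).normal(scale=2.0, size=X.shape)

# t-SNE embeds into 2 dimensions by minimizing the KL divergence between
# neighbor distributions in the high- and low-dimensional spaces.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_noisy)
print(emb.shape)  # (600, 2): ready to scatter-plot and inspect the clusters
```

Comparing how cluster separation in the embedding degrades as the noise scale grows is one simple way to probe how noise propagates through the reduction.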

What Motivates Me?

Why I Do What I Do, July 5, 2022
Caltech Class of '23 Commencement Picture

I grew up in Dubai as the child of immigrant parents from India who left their families and social circles behind and worked hard to make a living. Given the different culture of the Middle East, it wasn't easy for them to adapt. My mom worked as a chemistry teacher, managing extraordinary shifts while carrying me, and my dad balanced job instability. After she gave birth, he traveled 100 kilometers every day just to see me. Today, my dad teaches next-generation technology to a wide base of change-makers as a telecom instructor, and my mom inspires the lives of thousands of students as a principal. My parents, my superheroes, left an indelible impression on me of what it takes to strive towards success: the collective power of drive and resolve. I take the road less traveled, the wicket gate, to make an impact, knowing that it takes commitment to push the frontiers of computer science: create ideas, prove theorems, break barriers, and serve humanity. From all of this, I've learnt the importance of humility. Richard Feynman once said that "the more you know, the more you know what you don't know." I'm reminded every day of just how little I really know and how many global problems are out there just waiting to be solved. Yet my thirst for problem solving and my hunger to understand the unknown let me rise to this challenge.

"Facing West" by Christopher Zhou

Emile Timothy Anand

An explorer of the unknown. I'm interested in creating robust machine-learning algorithms and probing the limits of computation, especially questions concerning pseudorandomness.


Recent Books

  • Heard On The Street
    Timothy Falcon Crack
  • Fooled By Randomness
    Nassim Taleb
  • The Bed of Procrustes
    Nassim Taleb
  • Surely You're Joking, Mr. Feynman!
    Richard Feynman

Favorite Music

  • Spring Day
    BTS
  • Danse Macabre
    Saint-Saëns (Liszt's piano rendition)
  • Viva La Vida
    Coldplay
  • Dynamite
    BTS

Tags

Complexity Theory, Machine Learning, Derandomization, Convex Optimization, Control Theory, Expanders, PRGs



Find Me Online On

  • LinkedIn
    @emiletimothy
  • Instagram
    @emiletimothy
  • Twitter
    @emile217
  • GitHub
    @emiletimothy
  • Facebook
    @FlashPointMultiverse