My primary research interest is deep learning for science. In particular, I'm interested in understanding how neural networks learn and store information, so that they can be made useful for scientific discovery.
Some general questions I think about are: 1) when can domain knowledge allow us to impose an inductive bias on otherwise untethered neural networks, and how can we impose that bias most effectively? 2) where are current AI methods insufficient for extracting scientific knowledge that has been implicitly learned by neural networks, and how can we address these gaps?
Tseng, A.M., Eraslan, G., Diamant, N., Biancalani, T., Scalia, G. A mechanistically interpretable neural-network architecture for discovery of regulatory genomics. Machine Learning for Genomics Explorations (2024). [Link]
» Spotlight
Tseng, A.M., Shen, M., Biancalani, T., Scalia, G. Hierarchically branched diffusion models leverage dataset structure for class-conditional generation. TMLR (2024). [Link]
» Spotlight at the 2023 ICML Workshop on Computational Biology, and winner of the Most Popular Paper Award
Tseng, A.M., Diamant, N., Biancalani, T., Scalia, G. Complex Preferences for Different Convergent Priors in Discrete Graph Diffusion. Sampling and Optimization in Discrete Space (2023). [Link]
Diamant, N., Tseng, A.M., Chuang, K., Biancalani, T., Scalia, G. Improving Graph Generation by Restricting Graph Bandwidth. ICML (2023). [Link]
Tseng, A.M., Diamant, N., Biancalani, T., Scalia, G. GraphGUIDE: interpretable and controllable conditional graph generation with discrete Bernoulli diffusion. Machine Learning for Drug Discovery (2023). [Link]
Tseng, A.M., Shrikumar, A., Kundaje, A. Fourier-transform-based attribution priors improve the interpretability and stability of deep learning models for genomics. NeurIPS (2020). [Link]
» Spotlight at ISMB 2020, Machine Learning in Computational and Systems Biology track
Uehara, M., Zhao, Y., Black, K., Hajiramezanali, E., Scalia, G., Diamant, N., Tseng, A.M., Levine, S., Biancalani, T. Feedback Efficient Online Fine-Tuning of Diffusion Models. ICML (2024). [Link]
Uehara, M., Zhao, Y., Black, K., Hajiramezanali, E., Scalia, G., Diamant, N., Tseng, A.M., Biancalani, T., Levine, S. Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control. arXiv (2024). [Link]
Ghari, P.M., Tseng, A.M., Eraslan, G., Lopez, R., Biancalani, T., Scalia, G., Hajiramezanali, E. Generative Flow Networks Assisted Biological Sequence Editing. Generative AI and Biology (2023). [Link]
Hajiramezanali, E., Maleki, S., Tseng, A.M., BenTaieb, A., Scalia, G., Biancalani, T. On the Consistency of GNN Explainability Methods. XAI in Action (2023). [Link]
Shen, M.W., Hajiramezanali, E., Scalia, G., Tseng, A.M., Diamant, N., Biancalani, T., Loukas, A. Conditional Diffusion with Less Explicit Guidance via Model Predictive Control. arXiv (2022). [Link]
Ludwig, L., Lareau, C.A., Bao, E.L., Liu, N., Utsugisawa, T., Tseng, A.M., ..., Sankaran, V.G. A Congenital Anemia Reveals Distinct Targeting Mechanisms for Master Transcription Factor GATA1. Blood (2022). [Link]
Heavner, W.E., Ji, S., Notwell, J.H., Dyer, E.S., Tseng, A.M., Birgmeier, J.B., Yoo, B., Bejerano, G., McConnell, S.K. Transcription factor expression defines subclasses of developing projection neurons highly similar to single-cell RNA-seq subtypes. Proceedings of the National Academy of Sciences (2020). [Link]
Jones, E.M., Lubock, N.B., Venkatakrishnan, A., Wang, J., Tseng, A.M., Paggi, J.M., Latorraca, N.R., Cancilla, D., Satyadi, M., Davis, J.E., Babu, M.M., Dror, R.O., Kosuri, S. Structural and Functional Characterization of G Protein-Coupled Receptors with Deep Mutational Scanning. eLife (2020). [Link]
Tseng, A.M., Adricula, N., Lam, D.P. (May 2014). Teasing Apart Self-Explanations: How the Types of Utterances Generated While Self-Explaining May Impact Learning from Text. Poster presented at Berkeley Interdisciplinary Research Conference, Berkeley, CA.