
The Mathematics Colloquium in the Department of Mathematics & Computer Science at Rutgers-Newark takes place on Wednesdays 4-5pm, either in person at 204 Smith Hall, 101 Warren St., or via Zoom. All are welcome!

For more information or to be added to our mailing list, please email Kyle Hayden (kyle.hayden@rutgers.edu).

Schedule — Spring 2026

Date Speaker Title
Jan 28 Shira Wein (Amherst) Lost in Translation, and Found: Detecting and Interpreting Translation Effects in Large Language Models
Feb 4 Isaiah King (GWU) Cyber Threat Hunting with Graph Deep Learning
Feb 11 Bikash Kanungo (Michigan) Learning Physics Across Scales: From Quantum Many-Body Theory to Atomistic Models
Feb 16 (*Monday 4pm*) Soudeh Ghorbani (Johns Hopkins) Unblocking AI: Understanding and Overcoming Datacenter Network Bottlenecks in Distributed AI
Feb 18 Geyang Dai (Nat. U. Singapore) Elliptic Chern Characters and Elliptic Atiyah–Witten Formula
Feb 25 No colloquium *postponed due to weather*
Mar 2* Erin Chambers (Notre Dame) Braids and monodromy in topological data analysis (*part of the Distinguished Lectures in Topology, Geometry, and Physics)
Mar 4 Dave Auckly (Kansas State University) Smoothly knotted surfaces in small closed 4-manifolds
Mar 11 No colloquium *postponed*
Mar 18 No colloquium *Spring break*
Mar 25 Heidi Goodson (CUNY) TBD
Apr 1 Hannah Turner (Stockton University) TBD
Apr 8 Sahana H. Balasubramanya (Lafayette) TBD
Apr 15
Apr 22
Apr 29 Tamás Darvas (Maryland) A YTD correspondence for constant scalar curvature metrics
May 6

January 28
Shira Wein (Amherst)
Lost in Translation, and Found: Detecting and Interpreting Translation Effects in Large Language Models

Large language models are able to generate highly fluent text, in large part because they are trained on massive amounts of data. This data may contain “translationese”: hallmarks that distinguish translated texts from texts originally written in a language. Though individual translated texts are often fluent and meaning-preserving, at scale the presence of translated texts in training data degrades model performance, and their presence in test data inflates evaluation scores. In this work, I investigate (1) whether humans can distinguish texts originally written in English from texts translated into English, (2) how the surface-level features of translationese can be mitigated using Abstract Meaning Representation, and (3) why neural classifiers distinguish original and translated English texts much more accurately than humans do.

 

February 4
Isaiah King (GWU)
Cyber Threat Hunting with Graph Deep Learning

Modern computer networks generate massive volumes of high-dimensional, time-evolving data, making it increasingly challenging to detect and respond to cyber-attacks. This challenge is especially acute for novel or zero-day attacks, where predefined signatures or heuristics are ineffective. This talk presents a framework for modeling large-scale computer networks as temporal graphs, enabling scalable, precise, and generalizable approaches to intrusion detection and incident response. Using graph deep learning techniques, including temporal link prediction and graph representation learning, anomalous activity can be identified in complex network environments. If an attacker is detected on the network, the same graph-based abstraction supports decision-making for active defense. By framing network defense as a multi-agent Markov game, graph-based reinforcement learning can be used to reason about adversarial behavior and select actions that contain and remove attackers while minimizing disruption to normal operations. This work lies at the intersection of cybersecurity and data science, advancing the state of the art in attack detection, attribution, and automated response to sophisticated adversaries operating in real-world networked systems.

 

February 11
Bikash Kanungo (Michigan)
Learning Physics Across Scales: From Quantum Many-Body Theory to Atomistic Models

Density functional theory (DFT) and atomistic simulation have long been the backbone of computational chemistry and materials science. DFT, an electronic structure method, provides a quantum-mechanical description of interacting electrons. Atomistic methods, on the other hand, remove the electronic degrees of freedom to simulate the dynamics of atoms at length- and time-scales beyond DFT’s reach. Together, these approaches account for 40% of global scientific computing resources. Despite their success, both suffer from long-standing challenges in accuracy, transferability, scalability, and multiscale consistency. In DFT, the unknown exchange-correlation (XC) functional defines a nonlinear, nonlocal map from the electron density to the energy, and current approximations to it remain far from chemical accuracy. Moreover, the high computational demands of DFT limit its routine use to a few thousand atoms. In atomistic modeling, interatomic potentials (IPs), whether classical or machine-learned, are typically fit to narrow datasets, limiting their transferability. More importantly, the lack of electronic information in IPs limits their ability to describe electronically driven phenomena, such as those arising in surface chemistry, emergent behavior in quantum materials, and biochemical reactions.

In this talk, I will present approaches to address these fundamental challenges in DFT and atomistics. First, I will show how accurate quantum many-body data can be used to machine-learn XC functionals by solving the inverse DFT problem, yielding systematically improvable and physically constrained models. Second, I will introduce a new approach, termed field-theoretic atomistics (FTA), as a large-scale machine-learned surrogate for DFT that adheres to the known physical symmetries and variational principles. Unlike IPs, FTA retains electronic degrees of freedom at computational costs similar to those of current machine-learned IPs. Finally, I will discuss how, combined with fast and scalable numerical methods, this approach of integrating machine learning with physical principles can overcome long-standing accuracy and scale barriers in quantum mechanical modeling of materials.

 

February 16 (Monday)
Soudeh Ghorbani (Johns Hopkins)
Unblocking AI: Understanding and Overcoming Datacenter Network Bottlenecks in Distributed AI

As companies continue to invest heavily in AI-dedicated datacenters, a critical yet often underestimated challenge persists: datacenter networks remain a major bottleneck in distributed AI training. Despite rapid advances in compute hardware and machine learning algorithms, network congestion and communication overhead still limit the scalability and efficiency of large-scale AI workloads.

In this talk, I will present insights from a comprehensive study I led, where our team instrumented and analyzed traffic across 20+ AI datacenters of a major hyperscaler. Our investigation revealed key characteristics of AI workloads, the root causes of persistent network bottlenecks, and the challenges that arise when attempting to mitigate them. Building on these findings, I will introduce new datacenter network designs that challenge long-standing paradigms, such as strict shortest-path routing and in-order packet delivery, by embracing more flexible, robust strategies. I will show how these approaches pinpoint and alleviate bottlenecks, yielding substantial performance improvements. I will conclude with open research questions and future directions in optimizing networks for AI at scale.

 

February 18
Geyang Dai (National University of Singapore)
Elliptic Chern Characters and Elliptic Atiyah–Witten Formula

A principal G-bundle over a manifold X, equipped with a connection, together with a positive-energy representation, gives rise to a circle-equivariant gerbe module on the free loop space LX. From this data we construct an elliptic Chern character on LX, and a refinement, the elliptic Bismut–Chern character on the double loop space.

We also generalize the Atiyah–Witten formula to double loop space. We show that the four Pfaffian sections, corresponding to the four spin structures on an elliptic curve, are identified with the four elliptic holonomies arising from the four virtual level one positive-energy representations when G=Spin. These constructions are closely related to conformal blocks in Chern–Simons gauge theory.

 

March 2
Erin Chambers (University of Notre Dame)
Braids and monodromy in topological data analysis

In this talk, we will discuss recent work that connects two fundamental but generally distinct areas in computational topology: topological data analysis (TDA) and knot theory. Given a function from a topological space to $\mathbb{R}$, TDA provides tools to simplify and study the importance of topological features: in particular, the $l$-th persistence diagram encodes the changes in the $l$-dimensional homology of the sublevel sets, as the function value increases, as a set of points in the plane. Given a continuous one-parameter family of such functions, we can combine the persistence diagrams into an object known as a vineyard, which tracks the evolution of points in the persistence diagram as the function changes. If we further restrict that family of functions to be periodic, we can identify the two ends of the vineyard, yielding a closed vineyard. This allows the study of knots and braids in vineyards, and introduces the concept of monodromy to TDA, which in this context means that following the family of functions through one period permutes the set of points in a non-trivial way. Interestingly, the presence of monodromy in a vineyard also connects in a fundamental way to singularity theory, namely to the medial axis and symmetry set of the shape.

After reviewing the basic constructions of persistence diagrams and vineyards, we will show a construction that generates arbitrary knots and monodromy in a vineyard, and (time allowing) will explain how this behavior arises from specific types of singularities in the symmetry set.
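As a toy illustration only (not drawn from the talk; the function name and sample data below are made up for this sketch), the sublevel-set bookkeeping described above can be carried out by hand in dimension 0 for a function sampled at consecutive points on a line: each local minimum births a connected component, and the "elder rule" pairs the birth value of the younger of two merging components with the function value at which the merge occurs.

```python
# Toy 0-dimensional sublevel-set persistence via a union-find sweep.
# Vertices enter in order of increasing function value; a vertex with no
# active neighbor is a local minimum and births a component, and when two
# components meet, the elder rule kills the one born later.

def persistence_0d(values):
    """Return sorted (birth, death) pairs for the sublevel-set components."""
    n = len(values)
    parent = list(range(n))
    birth = {}                      # component root -> birth value
    active = [False] * n
    pairs = []

    def find(i):                    # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in sorted(range(n), key=lambda k: values[k]):
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n and active[j]]
        active[i] = True
        if not nbrs:
            birth[i] = values[i]    # local minimum: a new component is born
        else:
            roots = {find(j) for j in nbrs}
            oldest = min(roots, key=lambda r: birth[r])
            parent[i] = oldest
            for r in roots - {oldest}:
                pairs.append((birth[r], values[i]))  # younger component dies
                parent[r] = oldest
    pairs.append((min(values), float("inf")))        # global min never dies
    return sorted(pairs)

print(persistence_0d([1.0, 3.0, 0.0, 2.0, 4.0]))
# → [(0.0, inf), (1.0, 3.0)]
```

This covers only the simplest 0-dimensional case; production TDA libraries such as GUDHI handle higher-dimensional homology, general filtered complexes, and vineyards.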

 

March 4
Dave Auckly (Kansas State University)
Smoothly knotted surfaces in small closed 4-manifolds

It has long been known that homeomorphic 4-manifolds may admit inequivalent smooth structures. Analogous behavior holds for embedded surfaces. Some surfaces are topologically isotopic without being smoothly isotopic. Such surfaces are said to be smoothly knotted. It turns out that it is easier to construct inequivalent smooth structures on larger 4-manifolds. Similarly, it is easier to construct closed, smoothly knotted surfaces in large 4-manifolds. In this talk, we will explain how to construct smoothly knotted surfaces in a small 4-manifold. This talk will have many pictures. This is joint work with Konno, Mukherjee, Ruberman, and Taniguchi.

 

April 29
Tamás Darvas (University of Maryland)
A YTD correspondence for constant scalar curvature metrics

Given a compact Kähler manifold, to better understand Mabuchi’s K-energy we introduce a family of $K^\beta$-energies whose favorable properties are similar to those of the Ding energy in the Fano case. The construction uses Berman’s transcendental quantization, and we show that the slope of the $K^\beta$-energies along test configurations can be computed using intersection theory. With these ingredients in place, we provide a uniform Yau–Tian–Donaldson correspondence that characterizes the existence of a unique constant scalar curvature Kähler metric using test configurations. Combining our techniques with the non-Archimedean approach to K-stability pioneered by Boucksom–Jonsson, we show that the properness of the classical K-energy can be tested by checking its slope along a distinguished subclass of Li-type models, called log discrepancy models, thus yielding another G-uniform Yau–Tian–Donaldson correspondence. (Joint work with Kewei Zhang.)