Upcoming seminars
Algebra Seminar
Nash blowup fails to resolve singularities in dimensions four and higher
Speaker: Alvaro Liendo
ROOM 228
Hironaka’s celebrated resolution of singularities in characteristic zero proceeds by systematically blowing up carefully chosen subvarieties. An alternative, introduced by Nash, replaces these ad hoc choices with a canonical birational transformation: the (normalized) Nash blowup. A longstanding question is whether iterating these Nash blowups eventually resolves singularities.
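For context, the Nash blowup admits a concise standard description (a sketch in standard notation, not taken from the abstract itself). For an irreducible variety $X \subseteq \mathbb{A}^n$ of dimension $d$ with smooth locus $X_{\mathrm{sm}}$, the Gauss map

$$\gamma \colon X_{\mathrm{sm}} \longrightarrow \operatorname{Gr}(d, n), \qquad x \longmapsto T_x X,$$

sends each smooth point to its tangent space, and the Nash blowup is the closure of its graph,

$$\operatorname{Nash}(X) = \overline{\{(x, T_x X) : x \in X_{\mathrm{sm}}\}} \subseteq X \times \operatorname{Gr}(d, n),$$

together with the projection to $X$, which is an isomorphism over $X_{\mathrm{sm}}$; the normalized Nash blowup composes this construction with normalization.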
In this talk, based on recent joint work with Federico Castillo, Daniel Duarte, and Maximiliano Leyton-Álvarez, we show that iterated (normalized) Nash blowups fail to resolve singularities for algebraic varieties of dimension four or higher over an algebraically closed field of arbitrary characteristic. Our proof relies on explicit constructions of toric varieties that exhibit this obstruction. Consequently, such a purely canonical approach to resolution of singularities cannot be achieved in dimension four or greater.
Algebra Seminar
Sheaf Theory and Machine Learning: New Perspectives on Graph Neural Networks
Speaker: Ana Luiza da Conceição Tenorio
ROOM 228
On the one hand, we have sheaves, which are often introduced in a rather technical way and have applications in abstract areas such as Algebraic Topology, Algebraic Geometry, and Categorical Logic. On the other hand, we have Graph Neural Networks (GNNs), which are widely used in machine learning for graph-structured data, with real-world applications in molecular modeling, recommender systems, and more.
In this talk, we will present sheaves on a fixed graph, which admit a simple characterization as so-called cellular sheaves, and discuss the advantages of using them to improve GNN models; doing so requires generalizing the graph Laplacian to the Hodge Laplacian of a sheaf on a graph. We will explore two situations where cellular sheaves are particularly useful: when the input graphs are “heterophilic” (i.e., connected nodes do not tend to share similar attributes), and when we want to avoid oversmoothing, the phenomenon where every node converges to the same representation. These ideas were studied in the 2022 paper "Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs" by Cristian Bodnar et al.
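To make the objects concrete, here is a minimal sketch in Python (illustrative code with invented names, not taken from the talk or the paper) of a cellular sheaf on a small graph, the sheaf Laplacian it induces, and one step of sheaf diffusion:

import numpy as np

# A cellular sheaf on a graph: each node carries a d-dimensional stalk,
# and each incident (node, edge) pair carries a restriction map
# F_{v <= e} : R^d -> R^d. (Illustrative sketch, not the authors' code.)

rng = np.random.default_rng(0)
d = 2                                   # stalk dimension
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]                # a path graph on 3 nodes

# Random restriction maps for illustration; identity matrices would give
# the trivial sheaf, recovering the usual graph Laplacian (tensored with I_d).
F = {(v, e): rng.standard_normal((d, d)) for e in edges for v in e}

# Coboundary delta : C^0 -> C^1, one block row per oriented edge (u, v):
# (delta x)_e = F_{v <= e} x_v - F_{u <= e} x_u.
n, m = len(nodes), len(edges)
delta = np.zeros((m * d, n * d))
for i, e in enumerate(edges):
    u, v = e
    delta[i*d:(i+1)*d, u*d:(u+1)*d] = -F[(u, e)]
    delta[i*d:(i+1)*d, v*d:(v+1)*d] = F[(v, e)]

# Sheaf Laplacian: the degree-0 Hodge Laplacian of the sheaf.
L = delta.T @ delta

# One step of sheaf diffusion on stacked node features in C^0.
X = rng.standard_normal((n * d, 1))
alpha = 0.1
X_next = X - alpha * (L @ X)            # X_{t+1} = X_t - alpha * L_F X_t

With identity restriction maps, L reduces block-wise to the ordinary graph Laplacian, which is one way to see the sheaf Laplacian as a strict generalization of the operator used in standard GNNs.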
No prior knowledge of Category Theory or Neural Networks will be assumed. The goal is to present a very recent application of sheaf theory in machine learning to a broad audience.