Conference Name: Workshop on Computational Techniques in Physical Sciences and Data Science
Organizers: Weiguo Gao, Rujun Jiang, Ke Wei, Shuqin Zhang
Host: School of Mathematical Sciences, Fudan University; School of Data Science, Fudan University
Venue: Room 1801, East Guanghua Main Building
Opening Time: May 10, 2018
Closing Time: May 10, 2018
Abstract:

Workshop on Computational Techniques in Physical Sciences and Data Science

1801 East Guanghua Main Building, 2:00pm-5:00pm, May 10, 2018

Invited Talks:

1.     Xiuyuan Cheng, Duke University

Title: Convolutional Neural Network with Structured Filters

Abstract: Filters in a Convolutional Neural Network (CNN) contain model parameters learned from enormous amounts of data. The properties of the convolutional filters in a trained network directly affect the quality of the data representation being produced. In this talk, we introduce a framework for decomposing convolutional filters over a truncated expansion under pre-fixed bases, where the expansion coefficients are learned from data. Such a structure not only reduces the number of trainable parameters and the computational load, but also explicitly imposes filter regularity through basis truncation. Apart from maintaining prediction accuracy across image classification datasets, the decomposed-filter CNN also produces a representation that is stable with respect to input variations, which is proved under generic assumptions on the basis expansion. Joint work with Qiang Qiu, Robert Calderbank, and Guillermo Sapiro.
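To make the decomposition concrete, the following is a minimal illustrative sketch in Python/PyTorch, not the speakers' implementation: a convolutional layer whose filters are linear combinations of a small set of pre-fixed basis filters, so that only the expansion coefficients are trainable. The class name BasisDecomposedConv2d and the particular 3x3 basis filters (delta, gradients, Laplacian) are assumptions made for illustration; the bases used in the talk may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisDecomposedConv2d(nn.Module):
    # Conv layer with filters expanded over fixed bases; only the coefficients are learned.
    def __init__(self, in_channels, out_channels, basis):
        super().__init__()
        # basis: (K, kh, kw) pre-fixed filters, stored as a non-trainable buffer
        self.register_buffer("basis", basis)
        K = basis.shape[0]
        # trainable expansion coefficients, one set per (out, in) channel pair
        self.coeff = nn.Parameter(0.1 * torch.randn(out_channels, in_channels, K))

    def forward(self, x):
        # synthesize filters: weight[o, i] = sum_k coeff[o, i, k] * basis[k]
        weight = torch.einsum("oik,khw->oihw", self.coeff, self.basis)
        return F.conv2d(x, weight, padding=self.basis.shape[-1] // 2)

# toy usage with four fixed 3x3 basis filters
basis = torch.stack([
    torch.tensor([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]]),     # delta
    torch.tensor([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]]),  # horizontal gradient
    torch.tensor([[-1., -1., -1.], [0., 0., 0.], [1., 1., 1.]]),  # vertical gradient
    torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]),    # Laplacian
])
layer = BasisDecomposedConv2d(in_channels=3, out_channels=8, basis=basis)
y = layer(torch.randn(1, 3, 32, 32))  # output shape: (1, 8, 32, 32)

With a basis of K filters, the layer trains out_channels * in_channels * K coefficients instead of out_channels * in_channels * kh * kw full filter entries, which is the source of the parameter reduction mentioned in the abstract.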

2.     Yingzhou Li, Duke University

Title: Sparse Factorizations and Scalable Algorithms for Elliptic Differential Operators

Abstract: Sparse factorizations and scalable algorithms for elliptic differential operators are presented in this talk. The operators are solved by the distributed-memory hierarchical interpolative factorization (DHIF). By exploiting locality and certain low-rank properties of elliptic differential operators, the hierarchical interpolative factorization achieves quasi-linear complexity for factorizing the discrete positive definite elliptic differential operator and linear complexity for solving the associated linear system. In this talk, the DHIF is introduced as a scalable, distributed-memory implementation of the hierarchical interpolative factorization. The DHIF organizes the processes in a hierarchical structure and keeps the communication as local as possible. The computational complexity is O((N/P) log N) for constructing the DHIF and O(N/P) for applying it, where N is the size of the problem and P is the number of processes. Extensive numerical examples are performed on the NERSC Edison system with up to 8192 processes. The numerical results agree with the complexity analysis and demonstrate the efficiency and scalability of the DHIF.
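As background, the following is a small Python (NumPy/SciPy) sketch of the interpolative decomposition (ID) that hierarchical interpolative factorizations build on, not the DHIF code itself: a numerically low-rank block is compressed to a few skeleton columns plus an interpolation matrix. The 1/r kernel between two well-separated 1D point clusters is an assumed toy example.

import numpy as np
from scipy.linalg import interpolative as sli

rng = np.random.default_rng(0)

# A numerically low-rank interaction block: 1/r kernel between two
# well-separated clusters of points on the real line.
x = rng.uniform(0.0, 1.0, 200)   # source points
y = rng.uniform(3.0, 4.0, 200)   # target points, separated from the sources
A = 1.0 / np.abs(y[:, None] - x[None, :])

# Interpolative decomposition to tolerance 1e-10: select k skeleton columns
# idx[:k] and an interpolation matrix proj such that A[:, idx] ~= B @ [I, proj].
k, idx, proj = sli.interp_decomp(A, 1e-10)
B = A[:, idx[:k]]                 # skeleton columns
P = np.empty((k, A.shape[1]))
P[:, idx[:k]] = np.eye(k)
P[:, idx[k:]] = proj
rel_err = np.linalg.norm(A - B @ P) / np.linalg.norm(A)
print("numerical rank:", k, " relative error:", rel_err)

The factorization described in the talk applies this kind of compression recursively over a hierarchy of subdomains and distributes it across processes, which is where the quasi-linear and linear complexities come from.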

3.     Meiyue Shao, Lawrence Berkeley National Laboratory

Title: Conquering Algebraic Nonlinearity in Nonlinear Eigenvalue Problems

Abstract: We present a linearization scheme for solving the algebraic nonlinear eigenvalue problem $T(\lambda)x = 0$. By algebraic, we mean that each entry of $T(\lambda)$ is an algebraic function of $\lambda$. In contrast to existing approximation-based approaches, which typically aim at finding only a few eigenvalues, our linearization scheme can be used to compute all eigenvalues, counted with algebraic multiplicity. As an example, we apply this linearization scheme to analyze the gun problem from the NLEVP collection.
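For readers unfamiliar with linearization, here is a minimal Python (NumPy/SciPy) sketch of the idea on the simplest algebraic case, a quadratic eigenvalue problem. It illustrates standard companion linearization only, not the speaker's scheme for general algebraic $T(\lambda)$; the matrices and sizes are arbitrary assumptions.

import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(1)
n = 4
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# Quadratic problem T(lambda) x = (A0 + lambda*A1 + lambda^2*A2) x = 0.
# Companion linearization: (L0 - lambda*L1) [x; lambda*x] = 0 with
#   L0 = [[0, I], [-A0, -A1]],   L1 = [[I, 0], [0, A2]].
I = np.eye(n)
Z = np.zeros((n, n))
L0 = np.block([[Z, I], [-A0, -A1]])
L1 = np.block([[I, Z], [Z, A2]])

# The generalized eigenproblem yields all 2n eigenvalues of T, with multiplicities.
vals, vecs = eig(L0, L1)
lam = vals[0]
xvec = vecs[:n, 0]                 # top block recovers an eigenvector of T
residual = np.linalg.norm((A0 + lam * A1 + lam**2 * A2) @ xvec)
print("eigenvalues:", np.sort_complex(vals))
print("residual of first eigenpair:", residual)

The scheme presented in the talk extends this idea beyond polynomials to entries that are general algebraic functions of $\lambda$, while still recovering all eigenvalues.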

Organizers:

         Weiguo Gao, School of Mathematical Sciences, Fudan University

         Rujun Jiang, School of Data Science, Fudan University

         Ke Wei, School of Data Science, Fudan University

         Shuqin Zhang, School of Mathematical Sciences, Fudan University

Support:

         National Science Foundation of China (NSFC)

