Link: NAM Paper
Summary: NAM fits a separate neural network (MLP) to each input feature and sums their outputs to form the final model prediction.
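A minimal NumPy sketch of the NAM idea: one small MLP per feature, with the prediction being the sum of the per-feature outputs plus a bias. Sizes, weights, and names here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    # One-hidden-layer MLP applied to a single scalar feature column.
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU
    return h @ W2 + b2

# Illustrative sizes: 3 features, hidden width 8, randomly initialized weights.
n_features, hidden = 3, 8
params = [
    (rng.normal(size=(1, hidden)), np.zeros(hidden),
     rng.normal(size=(hidden, 1)), np.zeros(1))
    for _ in range(n_features)
]

def nam_predict(X, bias=0.0):
    # NAM prediction: sum of per-feature shape-function outputs plus a bias.
    contribs = [mlp_forward(X[:, [j]], *params[j]) for j in range(n_features)]
    return bias + np.sum(contribs, axis=0).ravel()

X = rng.normal(size=(5, n_features))
y_hat = nam_predict(X)
```

Because the model is a sum of univariate functions, each feature's contribution `mlp_forward(X[:, [j]], *params[j])` can be plotted on its own, which is the source of NAM's interpretability.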
Link: NodeGAM Paper
Summary: NodeGAM introduces a neural GAM (NODE-GAM) and a neural GA2M (NODE-GA2M) that scale well to large datasets while maintaining interpretability, using NODE (Neural Oblivious Decision Ensembles) as shape functions.
Link: NBM Paper
Summary: Neural Basis Models (NBMs) use basis decomposition of shape functions, enabling scalable and interpretable models that excel in accuracy and efficiency for large-scale data with high-dimensional features.
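A rough NumPy sketch of the NBM basis-decomposition idea: a single shared network maps any scalar feature to K basis values, and each feature only learns a K-vector of coefficients on that shared basis. All sizes and weights below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: D features, K shared basis functions, hidden width 16.
D, K, hidden = 4, 8, 16
W1 = rng.normal(size=(1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, K)); b2 = np.zeros(K)
C = rng.normal(size=(D, K))  # per-feature coefficients on the shared basis

def shared_basis(x):
    # One network shared by all features: scalar input -> K basis values.
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2  # shape (n, K)

def nbm_predict(X):
    # Each shape function is a linear combination of the shared basis:
    # f_j(x) = sum_k C[j, k] * b_k(x); the model output is sum_j f_j(x_j).
    return sum(shared_basis(X[:, [j]]) @ C[j] for j in range(D))

X = rng.normal(size=(6, D))
y_hat = nbm_predict(X)
```

The parameter count grows as one shared network plus D·K coefficients rather than D full networks, which is what makes the approach attractive for high-dimensional inputs.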
Link: SPAM Paper
Summary: Scalable Polynomial Additive Models (SPAM) leverage tensor rank decompositions of polynomials, outperforming current interpretable models and matching DNN/XGBoost performance.
Link: Sparse NAM Paper
Summary: Sparse Neural Additive Models (SNAM) enhance Neural Additive Models (NAMs) by incorporating group sparsity regularization for feature selection and improved generalization. SNAM provably achieves zero training loss and exact feature selection, demonstrating good accuracy and efficiency.
Link: SIAN Paper
Summary: Sparse Interaction Additive Networks (SIAN) identify the feature interactions the model actually needs. SIAN achieves competitive performance and strikes an effective tradeoff between neural network capacity and the generalizability of simpler models.
Link: Concurvity Regularization Paper
Summary: Concurvity Regularization addresses the issue of concurvity in Generalized Additive Models (GAMs) by penalizing pairwise correlations of non-linearly transformed feature variables. This improves interpretability without compromising prediction quality, reducing variance in feature importances.
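A small NumPy sketch of a concurvity-style penalty, assuming it is computed as the mean absolute pairwise Pearson correlation between the transformed feature columns f_j(x_j); the exact loss weighting and batching are not specified here.

```python
import numpy as np

def concurvity_penalty(F, eps=1e-8):
    # F: (n_samples, n_features) matrix whose column j holds the transformed
    # feature f_j(x_j) for every sample in the batch.
    Z = F - F.mean(axis=0)
    Z = Z / (Z.std(axis=0) + eps)        # standardize each column
    R = (Z.T @ Z) / F.shape[0]           # Pearson correlation matrix
    off_diag = ~np.eye(F.shape[1], dtype=bool)
    # Penalize only correlations between *distinct* shape functions.
    return np.abs(R[off_diag]).mean()

x = np.arange(5.0)
high = concurvity_penalty(np.column_stack([x, x]))       # duplicated column
rng = np.random.default_rng(0)
low = concurvity_penalty(rng.normal(size=(1000, 2)))     # independent noise
```

Adding such a term to the training loss pushes the shape functions toward decorrelated contributions, which is what stabilizes the feature importances the paper reports.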
Link: NATT Paper
Summary: Neural Additive Tabular Transformer Networks (NATT) combine the interpretability of additive neural networks with the predictive power of Transformer models. Categorical features are modelled with Transformer Encoders.
Link: NAMLSS Paper
Summary: Neural Additive Models for Location Scale and Shape (NAMLSS) integrate distributional regression with additive neural networks, extending beyond mean response predictions.
Link: NAIM Paper
Summary: Neural Additive Image Models (NAIM) utilize Neural Additive Models and Diffusion Autoencoders to identify latent image semantics and their effects. NAIM demonstrates the ability to explore complex image effects, with a case study highlighting the impact of image characteristics on Airbnb pricing.
Link: SNAM Paper
Summary: Structural Neural Additive Models (SNAMs) enhance the interpretability of neural networks by combining classical statistical methods (splines) with neural applications, fitting NAM shape functions with splines instead of MLPs and optimizing knot locations.
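A minimal NumPy sketch of fitting one spline-based shape function, using a truncated-power cubic basis with fixed knots and a least-squares fit; the paper additionally optimizes the knot locations, which this illustration omits. Data, knots, and function names are assumptions.

```python
import numpy as np

def cubic_spline_basis(x, knots):
    # Truncated-power basis for a cubic spline with the given interior knots:
    # [x, x^2, x^3, (x - k1)_+^3, (x - k2)_+^3, ...]
    cols = [x, x**2, x**3] + [np.maximum(0.0, x - k) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_spline_shape(x, y, knots):
    # Least-squares fit of one shape function (intercept column prepended).
    B = np.column_stack([np.ones_like(x), cubic_spline_basis(x, knots)])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef

# Toy example: recover a smooth 1-D effect from noisy observations.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = np.sin(3 * x) + 0.05 * rng.normal(size=200)
knots = [-0.5, 0.0, 0.5]
coef = fit_spline_shape(x, y, knots)
B = np.column_stack([np.ones_like(x), cubic_spline_basis(x, knots)])
y_fit = B @ coef
```

In a full SNAM, one such spline replaces each per-feature MLP of a NAM, and the knot positions become trainable parameters alongside the coefficients.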
Link: Semi-Structured Distributional Regression Paper
Summary: This framework combines structured regression models with deep neural networks, addressing identifiability issues through an orthogonalization cell. It enables stable estimation and interpretability, demonstrated through numerical experiments and real-world applications.