{"id":363,"date":"2020-07-09T16:51:59","date_gmt":"2020-07-09T16:51:59","guid":{"rendered":"http:\/\/sites.rutgers.edu\/ahmed-elgammal\/?page_id=363"},"modified":"2025-07-07T23:31:04","modified_gmt":"2025-07-07T23:31:04","slug":"publications","status":"publish","type":"page","link":"https:\/\/sites.rutgers.edu\/ahmed-elgammal\/publications\/","title":{"rendered":"Research Papers"},"content":{"rendered":"<h3>Peer-reviewed publications (Featured)<\/h3>\n<p>&nbsp;<\/p>\n<p><strong>2024<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Kunpeng Song, Tingbo Hou, Zecheng He, Haoyu Ma, Jialiang Wang, Animesh Sinha, Sam Tsai, Yaqiao Luo, Xiaoliang Dai, Li Chen, Xide Xia, Peizhao Zhang, Peter Vajda, Ahmed Elgammal, Felix Juefei-Xu,<\/p>\n<p>&#8220;DirectorLLM for Human-Centric Video Generation&#8221;,<\/p>\n<p>arXiv:2412.14484.<\/p>\n<p>&nbsp;<\/p>\n<p>Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang<\/p>\n<p>&#8220;MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation&#8221;,<\/p>\n<p>ECCV 2024<\/p>\n<p>&nbsp;<\/p>\n<p>Faizan Farooq Khan, Diana Kim, Divyansh Jha, Youssef Mohamed, Hanna H Chang, Ahmed Elgammal, Luba Elliott, Mohamed Elhoseiny<\/p>\n<p>&#8220;AI Art Neural Constellation: Revealing the Collective and Contrastive State of AI-Generated and Human Art&#8221;,<\/p>\n<p>CVPR 2024<\/p>\n<p>&nbsp;<\/p>\n<p>Kunpeng Song, Ligong Han, Bingchen Liu, Dimitris Metaxas, Ahmed Elgammal<\/p>\n<p>\u201cDiffusion Guided Domain Adaptation of Image Generators\u201d,<\/p>\n<p>WACV 2024, arXiv:2212.04473<\/p>\n<p>&nbsp;<\/p>\n<p>Jing Huang, Ahmed Elgammal,<\/p>\n<p>\u201cTowards Artist Recognition Based on Material Rendering. 
A Case Study for Recognition of Rembrandt and Van Dyck\u201d<\/p>\n<p>Computer Vision and Image Analysis of Art (CVAA) 2024<\/p>\n<p>&nbsp;<\/p>\n<p><strong>2022<\/strong><\/p>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"1\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<p>&nbsp;<\/p>\n<p>Mark Gotham, Kunpeng Song, Nicolai B\u00f6hlefeld, Ahmed Elgammal<\/p>\n<p><a href=\"https:\/\/zenodo.org\/records\/7088335\">\u201cBeethoven X: Es k\u00f6nnte sein! (It could be!)\u201d<\/a>,<\/p>\n<p>Conference on AI Music Creativity, AIMC 2022<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<p>A. Elgammal, M. Mazzone<\/p>\n<p>\u201c<a href=\"https:\/\/raco.cat\/index.php\/Artnodes\/article\/view\/374040\">Artists, Artificial Intelligence and Machine-based Creativity in<\/a> <a href=\"https:\/\/playform.io\" target=\"_blank\" rel=\"noopener\">Playform<\/a>\u201d,<\/p>\n<p>Artnodes, July 2020<\/p>\n<p>&nbsp;<\/p>\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<p><span style=\"font-weight: 400\">Diana Kim, Ahmed Elgammal, and Marian Mazzone, <\/span><\/p>\n<p><span style=\"font-weight: 400\">\u201cProxy Learning of Visual Concepts of Fine Art Paintings from Styles through Language Models\u201d, <\/span><\/p>\n<p>AAAI <span style=\"font-weight: 400\">2022<\/span><\/p>\n<p>&nbsp;<\/p>\n<div class=\"s-component s-text\">\n<p>Mahyar Khayatkhoei, Ahmed Elgammal<\/p>\n<p>\u201cSpatial Frequency Bias in Convolutional Generative Adversarial Networks\u201d,<\/p>\n<p>AAAI 2022<\/p>\n<p>&nbsp;<\/p>\n<p><strong>2021<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal<\/p>\n<p>\u201cTowards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis\u201d,<\/p>\n<p><span class=\"C9DxTc \">In Collaboration with <\/span><a class=\"XqQF9c\" href=\"https:\/\/playform.io\/\" target=\"_blank\" rel=\"noopener\"><span class=\"C9DxTc aw5Odc 
\">Playform AI<\/span><\/a><\/p>\n<p>ICLR 2021<\/p>\n<p>&nbsp;<\/p>\n<p>Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal<\/p>\n<p>\u201cSelf-Supervised Sketch-to-Image Synthesis\u201d,<\/p>\n<p><span class=\"C9DxTc \">In Collaboration with <\/span><a class=\"XqQF9c\" href=\"https:\/\/playform.io\/\" target=\"_blank\" rel=\"noopener\"><span class=\"C9DxTc aw5Odc \">Playform AI<\/span><\/a><\/p>\n<p>AAAI 2021<\/p>\n<p>&nbsp;<\/p>\n<p><strong>2020<\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>Bingchen Liu, Kunpeng Song, Yizhe Zhu, Gerard de Melo, Ahmed Elgammal<\/p>\n<p>\u201cTIME: Text and Image Mutual-Translation Adversarial Networks\u201d,<\/p>\n<p><span class=\"C9DxTc \">In Collaboration with <\/span><a class=\"XqQF9c\" href=\"https:\/\/playform.io\/\" target=\"_blank\" rel=\"noopener\"><span class=\"C9DxTc aw5Odc \">Playform AI<\/span><\/a><\/p>\n<p>AAAI 2021<\/p>\n<p>&nbsp;<\/p>\n<p>Bingchen Liu, Kunpeng Song, Ahmed Elgammal<\/p>\n<p>\u201cSketch-to-Art: Synthesizing Stylized Art Images From Sketches\u201d<\/p>\n<p><span class=\"C9DxTc \">In Collaboration with <\/span><a class=\"XqQF9c\" href=\"https:\/\/playform.io\/\" target=\"_blank\" rel=\"noopener\"><span class=\"C9DxTc aw5Odc \">Playform AI<\/span><\/a><\/p>\n<p>ACCV 2020<\/p>\n<p>&nbsp;<\/p>\n<p>Bingchen Liu, Yizhe Zhu, Zuohui Fu, Gerard de Melo and Ahmed Elgammal<\/p>\n<p>\u201cOOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization\u201d<\/p>\n<p>AAAI 2020<\/p>\n<p>&nbsp;<\/p>\n<p><strong>2019<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>Yizhe Zhu, Jianwen Xie, Zhiqiang Tang, Xi Peng , Ahmed Elgammal<\/p>\n<p>\u201cSemantic-Guided Multi-Attention Localization for Zero-Shot Learning,\u201d<\/p>\n<p>NeurIPS 2019.<\/p>\n<p>&nbsp;<\/p>\n<p>Yizhe Zhu, Jianwen Xie , Bingchen Liu, Ahmed Elgammal<\/p>\n<p>\u201cLearning Feature-to-Feature Translator by Alternating 
Back-Propagation for Zero-Shot Learning,\u201d<\/p>\n<p>ICCV 2019<\/p>\n<p>&nbsp;<\/p>\n<p>J. Zhang, K. Shih, A. Elgammal, A. Tao, B. Catanzaro.<\/p>\n<p>\u201cGraphical Contrastive Losses for Scene Graph Generation.\u201d<\/p>\n<p>CVPR 2019<\/p>\n<p>&nbsp;<\/p>\n<p>S. Geng*, J. Zhang*, H. Zhang, A. Elgammal, D. N. Metaxas,<\/p>\n<p><strong>2nd Place Solution to the GQA Challenge 2019<\/strong>.<\/p>\n<p>CVPR Workshops 2019<\/p>\n<p>&nbsp;<\/p>\n<p>J. Zhang, Y. Kalantidis, M. Rohrbach, M. Paluri, A. Elgammal, M. Elhoseiny.<\/p>\n<p>\u201cLarge-Scale Visual Relationship Understanding\u201d<\/p>\n<p>AAAI 2019<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, M. Mazzone<\/p>\n<p>\u201cArt, Creativity and the Potential of Artificial Intelligence\u201d,<\/p>\n<p>Arts, 2019, 8, 26. Special issue on The Machine as Artist (for the 21st Century).<\/p>\n<p>&nbsp;<\/p>\n<p>D. Kim, A. Elgammal, M. Mazzone<\/p>\n<p>\u201cComputational Analysis of Content in Fine Art Paintings\u201d<\/p>\n<p>Proceedings of the 10th International Conference on Computational Creativity<\/p>\n<p>ICCC 2019<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"2\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2018<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>M. Khayatkhoei, M. Singh, A. Elgammal<\/p>\n<p>\u201cDisconnected Manifold Learning for Generative Adversarial Networks\u201d<\/p>\n<p>NeurIPS 2018<\/p>\n<p>&nbsp;<\/p>\n<p>J. Zhang, K. Shih, A. Tao, B. Catanzaro, A. Elgammal.<\/p>\n<p>\u201cAn Interpretable Model for Scene Graph Generation\u201d.<\/p>\n<p>NeurIPS Workshops, 2018<\/p>\n<p>&nbsp;<\/p>\n<p>J. Zhang, K. Shih, A. Tao, B. Catanzaro, A. 
Elgammal.<\/p>\n<p>\u201cIntroduction to the <strong>1st Place Winning Model of Open Images Relationship Detection Challenge<\/strong>\u201d.<\/p>\n<p>ECCV Workshops, 2018<\/p>\n<p>&nbsp;<\/p>\n<p>Y. Zhu, M. Elhoseiny, B. Liu, X. Peng, and A. Elgammal,<\/p>\n<p>\u201cA generative adversarial approach for zero-shot learning from noisy texts\u201d<\/p>\n<p>CVPR 2018<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, B. Liu, D. Kim, M. Elhoseiny, M. Mazzone<\/p>\n<p>&#8220;The Shape of Art History in the Eyes of the Machine&#8221;<\/p>\n<p>AAAI 2018<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, Y. Kang, M. Den Leeuw<\/p>\n<p>&#8220;Picasso, Matisse, or a Fake? Automated Analysis of Drawings at the Stroke Level for Attribution and Authentication&#8221;<\/p>\n<p>AAAI 2018<\/p>\n<p>&nbsp;<\/p>\n<p>D. Kim, B. Liu, A. Elgammal, M. Mazzone<\/p>\n<p>&#8220;Finding Principal Semantics of Style in Art\u201d<\/p>\n<p>The 12th IEEE International Conference on Semantic Computing (ICSC), 2018<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"3\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2017<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>M. Elhoseiny, S. Cohen, W. Chang, B. Price, and A. Elgammal,<br \/>\n\u201cTowards Richer and Scalable Understanding of Facts in Images\u201d<br \/>\nAAAI 2017<\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny, A. Elgammal, and B. Saleh,<br \/>\n\u201cWrite a Classifier: Predicting Visual Classifiers from Unstructured Text Descriptions\u201d,<br \/>\nTPAMI &#8211; December 2017<\/p>\n<p>&nbsp;<\/p>\n<p>S. Huang, A. Elgammal, and D. 
Yang,<br \/>\n\u201cOn the Effect of Hyperedge Weights on Hypergraph Learning\u201d<br \/>\nImage and Vision Computing &#8211; 2017<\/p>\n<p>&nbsp;<\/p>\n<p>Y. Zhu and A. Elgammal<\/p>\n<p>&#8220;A Multilayer-Based Framework for Online Background Subtraction With Freely Moving Cameras\u201d<\/p>\n<p>ICCV 2017<\/p>\n<p>&nbsp;<\/p>\n<p>J. Zhang, M. Elhoseiny, W. Chang, S. Cohen, A. Elgammal,<\/p>\n<p>\u201cRelationship Proposal Networks\u201d<\/p>\n<p>CVPR 2017<\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny, Y. Zhu, H. Zhang, A. Elgammal,<\/p>\n<p>\u201cLink the head to the \u2018peak\u2019: Zero Shot Learning from Noisy Text descriptions at Part Precision\u201d<\/p>\n<p>CVPR 2017<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, B. Liu, M. Elhoseiny and M. Mazzone<\/p>\n<p>\u201cCAN: Creative Adversarial Networks\u201d,<\/p>\n<p>Proceedings of the International Conference on Computational Creativity<\/p>\n<p>ICCC 2017<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"4\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2016<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>B. Saleh, A. Elgammal, and J. Feldman,<br \/>\n\u201cThe Role of Typicality in Object Classification: Improving The Generalization Capacity of Convolutional Neural Networks\u201d<br \/>\nIJCAI 2016<\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny, T. El-Gaaly, A. Bakry, and A. 
Elgammal,<br \/>\n\u201cA Comparative Analysis and Study of Multiview Convolutional Neural Network Models for Joint Object Categorization and Pose Estimation\u201d,<br \/>\nICML 2016<\/p>\n<p>&nbsp;<\/p>\n<p>H. Zhang, T. Xu, M. Elhoseiny, X. Huang, S. Zhang, A. Elgammal, D. Metaxas,<br \/>\n\u201cSPDA-CNN: Unifying Semantic Part Detection and Abstraction for Fine-grained Recognition\u201d<br \/>\nCVPR 2016<\/p>\n<p>&nbsp;<\/p>\n<p>A. Bakry, M. Elhoseiny, T. El-Gaaly, and A. Elgammal,<br \/>\n\u201cDigging Deep into the layers of CNNs: In Search of How CNNs Achieve View Invariance\u201d<br \/>\nICLR 2016<\/p>\n<p>&nbsp;<\/p>\n<p>B. Saleh, A. Elgammal, J. Feldman and A. Farhadi<br \/>\n\u201cToward a Taxonomy and Computational Models of Abnormalities in Images\u201d,<br \/>\nAAAI 2016<br \/>\n<strong><em>Recipient of the Outstanding Student Paper Award<\/em><\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny, J. Liu, H. Cheng, H. Sawhney, and A. Elgammal<br \/>\n\u201cZero Shot Event Detection by Multimodal Distributional Semantic Embedding of Videos\u201d,<br \/>\nAAAI 2016<\/p>\n<p>&nbsp;<\/p>\n<p>S. Huang, Y. Yu, D. Yang, A. Elgammal and D. Yang,<br \/>\n\u201cCollaborative Graph Embedding: A Simple Way to Generally Enhance Subspace Learning Algorithms\u201d,<br \/>\nIEEE Transactions on Circuits and Systems for Video Technology (TCSVT),<br \/>\nOctober 2016<\/p>\n<p>&nbsp;<\/p>\n<p>P. Vepakomma and A. 
Elgammal<br \/>\n\u201cA fast algorithm for manifold learning by posing it as a symmetric diagonally dominant linear system\u201d,<br \/>\nApplied and Computational Harmonic Analysis Journal<br \/>\nMarch 2016<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"5\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2015<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>A. Elgammal and B. Saleh<br \/>\n\u201cQuantifying Creativity in Art Networks\u201d,<br \/>\nThe 6th International Conference on Computational Creativity (ICCC\u201915)<\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny and A. Elgammal<br \/>\n\u201cGeneralized Twin Gaussian Processes using Sharma-Mittal Divergence\u201d<br \/>\nMachine Learning Journal<br \/>\nAlso presented at ECML PKDD\u201915<\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny, S. Huang, and A. Elgammal<br \/>\n\u201cWeather Classification with Deep Convolution Neural Networks\u201d,<br \/>\nICIP 15<\/p>\n<p>&nbsp;<\/p>\n<p>S. Huang, M. Elhoseiny, A. Elgammal, D. Yang<br \/>\n\u201cLearning Hypergraph-regularized Attribute Predictors\u201d<br \/>\nCVPR 15<\/p>\n<p>&nbsp;<\/p>\n<p>T. El-Gaaly, V. Froyen, A. Elgammal, J. Feldman, M. Singh<br \/>\n\u201cA Bayesian Approach to Perceptual 3D Object-Part Decomposition Using Skeleton-Based Representations\u201d<br \/>\nAAAI 15<\/p>\n<p>&nbsp;<\/p>\n<p>H. Zhang, T. El-Gaaly, Z. Jiang, A. Elgammal<br \/>\n\u201cFactorization on View-Object Manifold for Joint Object Recognition and Pose Estimation\u201d<br \/>\nCVIU 2015<\/p>\n<p>&nbsp;<\/p>\n<p>X. Peng, J. Huang, Q. Hu, S. Zhang, A. Elgammal, and D. 
Metaxas<br \/>\n\u201cFrom Circle to 3-Sphere: Robust Head Pose Estimation by Instance Parameterization\u201d<br \/>\nCVIU 2015<\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny and A. Elgammal<br \/>\n\u201cText to Multi-level MindMaps: A Novel Method for Hierarchical Visual Abstraction of Natural Language Text\u201d<br \/>\nMultimedia Tools and Applications Journal, Springer, April 2015<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal<br \/>\n\u201cHomeomorphic Manifold Analysis (HMA): Untangling Complex Manifolds\u201d<br \/>\nAdvances in Imaging and Electron Physics<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"6\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2014<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>S. Huang, A. Elgammal, M. Elhoseiny, D. Yang, X. Zhang<br \/>\n\u201cImproving Non-Negative Matrix Factorization via Ranking Its Bases\u201d,<br \/>\nICIP 2014<\/p>\n<p>&nbsp;<\/p>\n<p>E. L. Spratt and A. Elgammal<br \/>\n\u201cComputational Beauty: Aesthetic Judgment at the Intersection of Art and Science\u201d<br \/>\nWhen Vision Meets Art (VisArt) Workshop 2014<\/p>\n<p>&nbsp;<\/p>\n<p>B. Saleh, K. Abe, R. Arora, A. Elgammal<br \/>\n\u201cToward Automated Discovery of Artistic Influence\u201d<br \/>\nMultimedia Tools and Applications \u2013 Springer \u2013 2014<\/p>\n<p>&nbsp;<\/p>\n<p>T. Senlet, T. El-Gaaly, A. Elgammal,<br \/>\n\u201cHierarchical Semantic Hashing: Visual Localization from Buildings on Maps\u201d<br \/>\nICPR 2014<\/p>\n<p>&nbsp;<\/p>\n<p>T. El-Gaaly, M. Torki, A. 
Elgammal<br \/>\n\u201cSpatial-Visual Label Propagation for Local Feature Classification\u201d<br \/>\nICPR 2014.<\/p>\n<p>&nbsp;<\/p>\n<p>A. Bakry and A. Elgammal<br \/>\n\u201cUntangling Object-View Manifold for Multiview Recognition and Pose Estimation\u201d<br \/>\nECCV 2014<\/p>\n<p>&nbsp;<\/p>\n<p>C. Tonde and A. Elgammal<br \/>\n\u201cSimultaneous Twin Kernel Learning using Polynomial Transformations for Structured Prediction\u201d<br \/>\nCVPR 2014<\/p>\n<p>&nbsp;<\/p>\n<p>B. Saleh, K. Abe and A. Elgammal<br \/>\n\u201cKnowledge Discovery of Artistic Influences: A Metric Learning Approach\u201d<br \/>\nICCC 2014<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"7\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2013<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>A. Elqursh and A. Elgammal<br \/>\n\u201cOnline Motion Segmentation using Dynamic Label Propagation\u201d<br \/>\nICCV 2013<\/p>\n<p>&nbsp;<\/p>\n<p>M. Elhoseiny, B. 
Saleh, and A. Elgammal<br \/>\n\u201cWrite a Classifier: Zero Shot Learning Using Purely Textual Descriptions\u201d<br \/>\nICCV 2013<\/p>\n<p>&nbsp;<\/p>\n<p>Haopeng Zhang, Zhiguo Jiang, and Ahmed Elgammal<br \/>\n\u201cVision-Based Pose Estimation for Cooperative Space Objects\u201d,<br \/>\nActa Astronautica Volume 91, October\u2013November 2013<\/p>\n<p>&nbsp;<\/p>\n<p>Sheng Huang, Ahmed Elgammal, and Dan Yang<br \/>\n\u201cLearning Speed Invariant Gait Template via Thin Plate Spline Kernel Manifold Fitting\u201d<br \/>\nBMVC 2013<\/p>\n<p>&nbsp;<\/p>\n<p>Haopeng Zhang, Tarek El-Gaaly, Ahmed Elgammal, Zhiguo Jiang<br \/>\n\u201cJoint Object and Pose Recognition Using Homeomorphic Manifold Analysis\u201d<br \/>\nAAAI 2013<\/p>\n<p>&nbsp;<\/p>\n<p>B. Saleh, A. Farhadi, A. Elgammal<br \/>\n\u201cObject-Centric Anomaly Detection by Attribute-Based Reasoning\u201d<br \/>\nCVPR 2013<\/p>\n<p>&nbsp;<\/p>\n<p>A. Bakry, A. Elgammal<br \/>\n\u201cManifold Kernel Partial Least Squares for Lipreading and Speaker Identification\u201d<br \/>\nCVPR 2013<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal and C.-S. Lee<br \/>\n\u201cHomeomorphic Manifold Analysis (HMA): Generalized Separation of Style and Content on Manifolds\u201d,<br \/>\nImage and Vision Computing Journal, April 2013 &#8211;<em><strong> Editor Choice Article<\/strong><\/em><\/p>\n<p>&nbsp;<\/p>\n<p>I. Chakraborty, A. Elgammal, and R. Burd<br \/>\n\u201cVideo based Activity Recognition in Trauma Resuscitation\u201d<br \/>\nFG 2013<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"8\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2012<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>T. 
Senlet and A. Elgammal<br \/>\n\u201cSegmentation of Occluded Sidewalks in Satellite Images\u201d,<br \/>\nICPR 2012<\/p>\n<p>&nbsp;<\/p>\n<p>T. El-Gaaly, M. Torki, A. Elgammal and M. Singh<br \/>\n\u201cRGBD Object Pose Recognition Using Local-Global Multi-Kernel Regression\u201d<br \/>\nICPR 2012<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elqursh and A. Elgammal<br \/>\n\u201cVideo Figure Ground Labeling\u201d<br \/>\nICPR 2012<\/p>\n<p>&nbsp;<\/p>\n<p>R. Arora and A. Elgammal<br \/>\n\u201cTowards Automated Classification of Fine-Art Painting Style: A Comparative Study\u201d<br \/>\nICPR 2012<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elqursh and A. Elgammal<br \/>\n\u201cSingle Axis Relative Rotation from Orthogonal Lines\u201d<br \/>\nICPR 2012<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elqursh and A. Elgammal<br \/>\n\u201cOnline Moving Camera Background Subtraction\u201d<br \/>\nECCV 2012<\/p>\n<p>&nbsp;<\/p>\n<p>T. Senlet and A. Elgammal<br \/>\n\u201cSatellite Image Based Precise Robot Localization on Sidewalks\u201d<br \/>\nICRA 2012<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cStyle Adaptive Contour Tracking of Human Gait Using Explicit Manifold Models\u201d<br \/>\nMachine vision and applications journal,<br \/>\nMay 2012<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"9\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2011<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>E. K. Gnang, A. Elgammal, V. Retakh<br \/>\n\u201cA Spectral Theory for Tensors\u201d<br \/>\nThe Annales de la Facult\u00e9 des Sciences de Toulouse, S\u00e9r. 6, 20 no. 4 (2011), p. 
801-841.<br \/>\nAvailable at arXiv:1008.2923v4 [math.SP]<\/p>\n<p>&nbsp;<\/p>\n<p>M. Torki and A. Elgammal<br \/>\n\u201cRegression from Local Features for Viewpoint and Posture Estimation\u201d<br \/>\nICCV\u201911<\/p>\n<p>&nbsp;<\/p>\n<p>T. Senlet and A. Elgammal<br \/>\n\u201cA Framework for Global Vehicle Localization Using Stereo Images and Satellite and Road Maps\u201d<br \/>\n2nd IEEE Workshop on Computer Vision in Vehicle Technology: From Earth to Mars, in conjunction with ICCV 2011<\/p>\n<p>&nbsp;<\/p>\n<p>T. Parag and A. Elgammal<br \/>\n\u201cHigher Order Markov Networks for Model Estimation\u201d<br \/>\nInternational Symposium on Visual Computing (ISVC\u201911)<\/p>\n<p>&nbsp;<\/p>\n<p>T. Parag and A. Elgammal<br \/>\n\u201cSupervised Hypergraph Labeling\u201d<br \/>\nCVPR\u201911<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elqursh and A. Elgammal<br \/>\n\u201cLine-Based Relative Pose Estimation\u201d<br \/>\nCVPR\u201911<\/p>\n<p>&nbsp;<\/p>\n<p>S. Smaldone, C. Tonde, V. K. Ananthanarayanan, A. Elgammal, and L. Iftode<br \/>\n\u201cThe Cyber-Physical Bike: A Step Towards Safer Green Transportation\u201d<br \/>\nHotMobile\u201911<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"10\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2010<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>M. Torki, A. Elgammal, and C-S. Lee<br \/>\n&#8220;Learning a Joint Manifold Representation from Multiple Data Sets\u201d<br \/>\nICPR\u201910<\/p>\n<p>&nbsp;<\/p>\n<p>I. Chakraborty and A. Elgammal<br \/>\n&#8220;Object Localization by Propagating Connectivity via Superfeatures\u201d<br \/>\nICPR\u201910<\/p>\n<p>&nbsp;<\/p>\n<p>M. Torki and A. 
Elgammal<br \/>\n\u201cOne-Shot Multi-Set Non-rigid Feature-Spatial Matching\u201d<br \/>\nCVPR\u201910<\/p>\n<p>&nbsp;<\/p>\n<p>M. Torki and A. Elgammal<br \/>\n\u201cPutting Local Features on a Manifold\u201d<br \/>\nCVPR\u201910<\/p>\n<p>&nbsp;<\/p>\n<p>C-S. Lee and A. Elgammal<br \/>\n\u201cCoupled Visual and Kinematics Manifold Models for Human Motion Analysis\u201d<br \/>\nInternational Journal on Computer Vision. Volume 87, Numbers 1-2, March 2010.<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"11\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2009<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>I. Chakraborty and A. Elgammal<br \/>\n&#8220;Contour segment matching by integrating intra and inter shape cues of objects&#8221;;<br \/>\nBMVC\u201909,<\/p>\n<p>&nbsp;<\/p>\n<p>C-S. Lee and A. Elgammal<br \/>\n\u201cDynamic shape outlier detection for human locomotion\u201d<br \/>\nComputer Vision and Image Understanding Journal (CVIU). Volume 113, Issue 3, March 2009.<\/p>\n<p>&nbsp;<\/p>\n<p>C-S. Lee and A. 
Elgammal<br \/>\n\u201cTracking People on a Torus\u201d<br \/>\nIEEE Transactions on Pattern Analysis and Machine Intelligence, March 2009<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"12\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2008<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>Z. Zhao and A. Elgammal<br \/>\n&#8220;Information Theoretic Key Frame Selection for Action Recognition&#8221;<br \/>\nBMVC\u201908<\/p>\n<p>&nbsp;<\/p>\n<p>Z. Zhao and A. Elgammal<br \/>\n\u201cHuman Activity Recognition from Frames\u2019 Spatiotemporal Representation\u201d<br \/>\nICPR\u201908<\/p>\n<p>&nbsp;<\/p>\n<p>I. Chakraborty and A. Elgammal<br \/>\n\u201cObject Detection based on Substructure Grouping\u201d<br \/>\nICPR\u201908<\/p>\n<p>&nbsp;<\/p>\n<p>Z. Zhao and A. Elgammal<br \/>\n\u201cSpatiotemporal Pyramid Representation for Recognition of Facial Expressions and Hand Gestures\u201d<br \/>\nFGR\u201908<\/p>\n<p>&nbsp;<\/p>\n<p>T. Parag, F. Porikli and A. Elgammal<br \/>\n\u201cBoosting Adaptive Linear Weak Classifiers for Online Learning and Tracking\u201d<br \/>\nCVPR\u201908<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"13\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2007<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>C.-S. Lee and A. 
Elgammal<br \/>\n&#8220;Modeling View and Posture Manifolds for Tracking&#8221;<br \/>\nICCV&#8217;07<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal and C.-S. Lee<br \/>\n\u201cNonlinear Manifold Learning for Dynamic Shape and Dynamic Appearance\u201d<br \/>\nComputer Vision and Image Understanding (CVIU) special issue on generative model based vision. April 2007<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cTracking People on a Torus\u201d<br \/>\nTechnical Report DCS &#8211; TR611<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"14\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2006<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cBody Pose Tracking From Uncalibrated Camera Using Supervised Manifold Learning\u201d<br \/>\nNIPS Workshop on Evaluation of Articulated Human Motion and Pose Estimation. EHuM06.<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal<br \/>\n\u201cHuman-centered Multimedia, Representations and Challenges\u201d<br \/>\nThe 1st ACM International Workshop on Human-Centered Multimedia HCM &#8217;06 &#8212; with ACM Multimedia 2006.<br \/>\n<em><strong>Invited Position Paper.<\/strong><\/em><\/p>\n<p>&nbsp;<\/p>\n<p>T. Parag and A. Elgammal<br \/>\n\u201cUnsupervised Learning of Boosted Tree Classifier using Graph Cuts for Hand Pose Recognition\u201d<br \/>\nBMVC&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>Z. Zhao and A. Elgammal<br \/>\n\u201cA Statistically Selected Part-Based Probabilistic Model for Object Recognition\u201d,<br \/>\nInternational Workshop on Intelligent Computing in Pattern Analysis and Synthesis, IWICPAS 2006.<\/p>\n<p>&nbsp;<\/p>\n<p>R. Isukapalli, A. 
Elgammal and R. Greiner<br \/>\n\u201cLearning Policies for Efficiently Identifying Objects of Many Classes\u201d<br \/>\nICPR&#8217;06<\/p>\n<p>I. Chakraborty and A. Elgammal<br \/>\n\u201cCombining Low and High Level Features for Object Recognition\u201d<br \/>\nICPR&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cNonlinear Shape and Appearance Models for Facial Expressions\u201d<br \/>\nICPR&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cSimultaneous Inferring View and Body Pose Using Torus Manifolds\u201d<br \/>\nICPR&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee, A. Elgammal and Dimitris Metaxas,<br \/>\n\u201cSynthesis and Control of High Resolution Facial Expressions for Visual Interactions\u201d<br \/>\nICME&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cCarrying Object Detection Using Pose Preserving Dynamic Shape Model\u201d<br \/>\nAMDO&#8217;06 &#8211; LNCS 4069<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cHuman Motion Synthesis by Motion Manifold Learning and Motion Primitive Segmentation\u201d<br \/>\nAMDO&#8217;06 &#8211; LNCS 4069<\/p>\n<p>&nbsp;<\/p>\n<p>Z. Zhao, A. Vashist, A. Elgammal, I. Muchnik and C. Kulikowski<br \/>\n\u201cDiscriminative Part Selection using Combinatorial and Statistical Models for Part-Based Object Recognition\u201d<br \/>\nBeyond Patches Workshop &#8211; with CVPR&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>T. Parag, A. Elgammal and A. Mittal<br \/>\n\u201cA Framework for Feature Selection for Background Subtraction\u201d<br \/>\nCVPR&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>R. Isukapalli, A. Elgammal and R. Greiner<br \/>\n\u201cLearning Efficient Multiclass Object Detection Hierarchy\u201d<br \/>\nECCV&#8217;06 &#8211; LNCS 3951 Vol. I<\/p>\n<p>&nbsp;<\/p>\n<p>N. Ravi, P.Shankar, A. Frankel, A. Elgammal and L. 
Iftode<br \/>\n\u201cIndoor Localization Using Camera Phones\u201d<br \/>\nThe 7th IEEE Workshop on Mobile Computing Systems and Applications, WMCSA 2006, April 2006.<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cGait Tracking and Recognition using Person-Dependent Dynamic Shape Models\u201d<br \/>\nFGR&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>R. Isukapalli, A. Elgammal and R. Greiner<br \/>\n\u201cLearning to Identify Facial Expression During Detection using Markov Decision Process\u201d,<br \/>\nFGR&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<p>I. Awasthi and A. Elgammal<br \/>\n\u201cLearning Nonlinear Manifolds of Dynamic Textures\u201d<br \/>\nInternational Conference on Computer Vision Theory and Applications VISAPP&#8217;06<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"15\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2005<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cHomeomorphic Manifold Analysis: Learning Decomposable Generative Models for Human Motion Analysis\u201d<br \/>\nWorkshop on Dynamic Vision (WDV05), with ICCV&#8217;05<\/p>\n<p>&nbsp;<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cFacial Expression Analysis using Nonlinear Decomposable Generative Models\u201d<br \/>\nIEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG05) with ICCV&#8217;05.<\/p>\n<p>&nbsp;<\/p>\n<p>R. Isukapalli, A. Elgammal, and R. 
Greiner<br \/>\n\u201cLearning a Dynamic Classification Method to Detect Faces and Identify Facial Expression\u201d<br \/>\nIEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG05) with ICCV&#8217;05.<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cStyle Adaptive Bayesian Tracking Using Explicit Manifold Learning\u201d<br \/>\nBMVC&#8217;05<\/p>\n<p>C.-S. Lee and A. Elgammal<br \/>\n\u201cTowards Scalable View-Invariant Gait Recognition: Multilinear Analysis for Gait\u201d<br \/>\nAudio- and Video-based Biometric Person Authentication Conference AVBPA&#8217;05<\/p>\n<p>A. Elgammal<br \/>\n\u201cLearning to Track: Conceptual Manifold Map for Closed-form Tracking\u201d<br \/>\nCVPR&#8217;05<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"16\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2004<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>V. Shet, V. Shiv Naga Prasad, A. Elgammal, L. S. Davis, Y. Yacoob<br \/>\n\u201cMulti-Cue Exemplar-Based Nonparametric Model for Gesture Recognition\u201d,<br \/>\n4th Indian Conference on Computer Vision, Graphics and Image Processing ICVGIP04<br \/>\n<em><strong>Recipient of Honorary Mention for Best Paper Award.<\/strong><\/em><\/p>\n<p>&nbsp;<\/p>\n<p>Y. Wang, X. Huang, C.-S. Lee, S. Zhang, Z. Li, D. Samaras, D. Metaxas, A. Elgammal, P. Huang<br \/>\n\u201cHigh Resolution Acquisition, Learning and Transfer of Dynamic 3D Facial Expressions\u201d<br \/>\nEurographics 2004<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, C.-S. Lee<br \/>\n\u201cSeparating Style and Content on a Nonlinear Manifold\u201d<br \/>\nCVPR&#8217;04<\/p>\n<p>&nbsp;<\/p>\n<p>A. 
Elgammal, C.-S. Lee<br \/>\n\u201cInferring 3D Body Pose from Silhouettes using Activity Manifold Learning\u201d<br \/>\nCVPR&#8217;04<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal<br \/>\n\u201cNonlinear Generative Models for Dynamic Shape and Dynamic Appearance\u201d<br \/>\n2nd International Workshop on Generative-Model based vision. GMBV 2004, with CVPR&#8217;04.<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, C.-S. Lee<br \/>\n\u201cGait Style and Gait Content: Bilinear Model for Gait Recognition Using Gait Re-sampling\u201d<br \/>\nFGR&#8217;04<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"17\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2003<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>A. Elgammal, R. Duraiswami and L. S. Davis<br \/>\n\u201cEfficient Kernel Density Estimation Using the Fast Gauss Transform for Computer Vision\u201d<br \/>\nIEEE Transactions on Pattern Analysis and Machine Intelligence. Dec 2003.<\/p>\n<p>&nbsp;<\/p>\n<p>S.-N. Lim, A. Elgammal, L. S. Davis<br \/>\n\u201cA Scalable Image-Based Multi-Camera Visual Surveillance System\u201d<br \/>\nIEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2003)<\/p>\n<p>&nbsp;<\/p>\n<p>S.-N. Lim, A. Elgammal, L. S. Davis<br \/>\n\u201cImage-based Pan-Tilt Camera Control in a Multi-Camera Surveillance Environment\u201d<br \/>\nICME&#8217;03<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, V. Shet, Y. Yacoob, and L. S. Davis<br \/>\n\u201cLearning Dynamics for Exemplar-based Gesture Recognition\u201d<br \/>\nCVPR&#8217;03<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, R. Duraiswami, and L. S. 
Davis<br \/>\n\u201cProbabilistic Tracking in Joint Feature-Spatial Spaces\u201d<br \/>\nCVPR&#8217;03<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"18\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2002<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>A. Elgammal, R. Duraiswami, D. Harwood and L. S. Davis<br \/>\n\u201cBackground and Foreground Modeling using Non-parametric Kernel Density Estimation for Visual Surveillance\u201d,<br \/>\nProceedings of the IEEE, July 2002.<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, V. Shet, Y. Yacoob, L. S. Davis<br \/>\n\u201cGesture Recognition using a Probabilistic Framework for Pose Matching\u201d<br \/>\nThe Seventh International Conference on Control, Automation, Robotics and Vision, ICARCV&#8217;02<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"19\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2001<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>A. Elgammal, R. Duraiswami and L. S. Davis<br \/>\n\u201cEfficient Non-parametric Adaptive Color Modeling Using Fast Gauss Transform\u201d<br \/>\nCVPR&#8217;01<\/p>\n<p>A. Elgammal and L. S. Davis<br \/>\n\u201cProbabilistic Framework for Segmenting People Under Occlusion\u201d<br \/>\nICCV&#8217;01<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal, R. Duraiswami and L. S. 
Davis<br \/>\n\u201cEfficient Computation of Kernel Density Estimation using Fast Gauss Transform with Applications for Segmentation and Tracking\u201d<br \/>\nSecond International Workshop on Statistical and Computational Theories of Vision, &#8212; with ICCV&#8217;01<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal and M. A. Ismail<br \/>\n\u201cTechniques for Language Identification for Hybrid Arabic-English Document Images\u201d<br \/>\nSixth International Conference on Document Analysis and Recognition (ICDAR&#8217;01)<\/p>\n<p>&nbsp;<\/p>\n<p>A. Elgammal and M. A. Ismail<br \/>\n\u201cA Graph-Based Segmentation and Feature-extraction Framework for Arabic Text Recognition\u201d<br \/>\nSixth International Conference on Document Analysis and Recognition (ICDAR&#8217;01)<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"20\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>2000<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>A. Elgammal, D. Harwood, L. S. Davis<br \/>\n\u201cNon-parametric Model for Background Subtraction\u201d<br \/>\nECCV&#8217;00<\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item\" data-sorting-index=\"21\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>1999<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>M. Abdel-Moteleb and A. 
Elgammal<br \/>\n\u201cFace Detection in Complex Environment from Color Images\u201d,<br \/>\nICIP&#8217;99<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"\"><\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"sixteen columns no-float s-repeatable-item s-last-row\" data-sorting-index=\"22\">\n<div class=\"s-mhi\">\n<div class=\"s-item-text-group \">\n<div class=\"s-item-title\">\n<div class=\"s-component s-text\">\n<p><strong>Dissertations<\/strong><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<div class=\"s-item-text\">\n<div class=\"s-component s-text\">\n<div class=\"s-component-content s-font-body\">\n<p>Ph.D. dissertation, \u201cEfficient Nonparametric Kernel Density Estimation for Real-time Computer Vision\u201d, Department of Computer Science, University of Maryland, College Park, 2002. [PDF]<\/p>\n<p>M.Sc. thesis, \u201cBilingual (Arabic\/Latin) Document Image Analysis with Font-Independent Arabic Character Recognition\u201d, Faculty of Engineering, University of Alexandria, July 1996.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a class=\"btn btn-primary\" href=\"\/contact\/\" target=\"_self\" rel=\"noopener noreferrer\">Contact Me<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Peer-reviewed publications (Featured) &nbsp; 2024 &nbsp; Kunpeng Song, Tingbo Hou, Zecheng He, Haoyu Ma, Jialiang Wang, Animesh Sinha, Sam Tsai, Yaqiao Luo, Xiaoliang Dai, Li Chen, Xide Xia, Peizhao Zhang, &hellip; <a href=\"https:\/\/sites.rutgers.edu\/ahmed-elgammal\/publications\/\" class=\"\">Read More<\/a><\/p>\n","protected":false},"author":21,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"footnotes":""},"class_list":["post-363","page","type-page","status-publish","hentry"],"acf":[]}