Publications
Publications by category in reverse chronological order. Generated by jekyll-scholar.
2025
- [TM-B] Learning heuristics for transit network design and improvement with deep reinforcement learning. Andrew Holliday, Ahmed El-Geneidy, and Gregory Dudek. Transportmetrica B: Transport Dynamics, 2025
@article{Holliday31122025,
  author    = {Holliday, Andrew and El-Geneidy, Ahmed and Dudek, Gregory},
  title     = {Learning heuristics for transit network design and improvement with deep reinforcement learning},
  journal   = {Transportmetrica B: Transport Dynamics},
  volume    = {13},
  number    = {1},
  pages     = {2561863},
  year      = {2025},
  publisher = {Taylor \& Francis},
  doi       = {10.1080/21680566.2025.2561863},
  url       = {https://doi.org/10.1080/21680566.2025.2561863},
}
2024
- [ICRA] A Neural-Evolutionary Algorithm for Autonomous Transit Network Design. Andrew Holliday and Gregory Dudek. In 2024 IEEE International Conference on Robotics and Automation, 2024
Planning a public transit network is a challenging optimization problem, but one that is essential to realizing the benefits of autonomous buses. We propose a novel algorithm for planning networks of routes for autonomous buses. We first train a graph neural net model as a policy for constructing route networks, and then use the policy as one of several mutation operators in an evolutionary algorithm. We evaluate this algorithm on a standard set of benchmarks for transit network design, and find that it outperforms the learned policy alone by up to 20% and a plain evolutionary algorithm approach by up to 53% on realistic benchmark instances.
@inproceedings{10611313,
  author    = {Holliday, Andrew and Dudek, Gregory},
  booktitle = {2024 IEEE International Conference on Robotics and Automation},
  title     = {A Neural-Evolutionary Algorithm for Autonomous Transit Network Design},
  year      = {2024},
  pages     = {4457-4464},
  keywords  = {Evolutionary computation;Benchmark testing;Graph neural networks;Planning;Robotics and automation;Standards;Optimization},
  doi       = {10.1109/ICRA57147.2024.10611313}
}

- [ICRA] Uncertainty-aware hybrid paradigm of nonlinear MPC and model-based RL for offroad navigation: Exploration of transformers in the predictive model. Faraz Lotfi, Khalil Virji, Farnoosh Faraji, and 4 more authors. In 2024 IEEE International Conference on Robotics and Automation, 2024
In this paper, we investigate a hybrid scheme that combines nonlinear model predictive control (MPC) and model-based reinforcement learning (RL) for navigation planning of an autonomous model car across offroad, unstructured terrains without relying on predefined maps. Our innovative approach takes inspiration from BADGR, an LSTM-based network that primarily concentrates on environment modeling, but distinguishes itself by substituting LSTM modules with transformers to greatly elevate the performance of our model. Addressing uncertainty within the system, we train an ensemble of predictive models and estimate the mutual information between model weights and outputs, facilitating dynamic horizon planning through the introduction of variable speeds. Further enhancing our methodology, we incorporate a nonlinear MPC controller that accounts for the intricacies of the vehicle’s model and states. The model-based RL facet produces steering angles and quantifies inherent uncertainty. At the same time, the nonlinear MPC suggests optimal throttle settings, striking a balance between goal attainment speed and managing model uncertainty influenced by velocity. In the conducted studies, our approach excels over the existing baseline by consistently achieving higher metric values in predicting future events and seamlessly integrating the vehicle’s kinematic model for enhanced decision-making.
@inproceedings{10610452,
  author    = {Lotfi, Faraz and Virji, Khalil and Faraji, Farnoosh and Berry, Lucas and Holliday, Andrew and Meger, David and Dudek, Gregory},
  booktitle = {2024 IEEE International Conference on Robotics and Automation},
  title     = {Uncertainty-aware hybrid paradigm of nonlinear MPC and model-based RL for offroad navigation: Exploration of transformers in the predictive model},
  year      = {2024},
  pages     = {2925-2931},
  keywords  = {Uncertainty;Navigation;Reinforcement learning;Predictive models;Transformers;Planning;Trajectory;Model-based RL;transformers;nonlinear MPC;uncertainty-aware planning;offroad navigation},
  doi       = {10.1109/ICRA57147.2024.10610452}
}
2023
- [ITSC] Augmenting Transit Network Design Algorithms with Deep Learning. Andrew Holliday and Gregory Dudek. In 2023 IEEE 26th International Conference on Intelligent Transportation Systems, 2023
This paper considers the use of deep learning models to enhance optimization algorithms for transit network design. Transit network design is the problem of determining routes for transit vehicles that minimize travel time and operating costs, while achieving full service coverage. State-of-the-art meta-heuristic search algorithms give good results on this problem, but can be very time-consuming. In contrast, neural networks can learn sub-optimal but fast-to-compute heuristics based on large amounts of data. Combining these approaches, we develop a fast graph neural network model for transit planning, and use it to initialize state-of-the-art search algorithms. We show that this combination can improve the results of these algorithms on a variety of metrics by up to 17%, without increasing their run time; or they can match the quality of the original algorithms while reducing the computing time by up to a factor of 50.
@inproceedings{10422363,
  author    = {Holliday, Andrew and Dudek, Gregory},
  booktitle = {2023 IEEE 26th International Conference on Intelligent Transportation Systems},
  title     = {Augmenting Transit Network Design Algorithms with Deep Learning},
  year      = {2023},
  pages     = {2343-2350},
  keywords  = {Deep learning;Costs;Heuristic algorithms;Metaheuristics;Urban areas;Search problems;Planning},
  doi       = {10.1109/ITSC57777.2023.10422363}
}
2021
- [AR] Scale-invariant localization using quasi-semantic object landmarks. Andrew Holliday and Gregory Dudek. Autonomous Robots, 2021
This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box b defining an object, a descriptor q of that object produced by a Convolutional Neural Network, and a set of classical point features within b. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state-of-the-art. They allow localization under scale change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
@article{holliday2021scale,
  title     = {Scale-invariant localization using quasi-semantic object landmarks},
  author    = {Holliday, Andrew and Dudek, Gregory},
  journal   = {Autonomous Robots},
  volume    = {45},
  number    = {3},
  pages     = {407--420},
  year      = {2021},
  publisher = {Springer},
}
2020
- [CRV] Pre-trained CNNs as Visual Feature Extractors: A Broad Evaluation. Andrew Holliday and Gregory Dudek. In 2020 17th Conference on Computer and Robot Vision, 2020
In this work, we perform a wide-ranging evaluation of Convolutional Neural Networks (CNNs) as feature extractors for matching visual features under large changes in appearance, perspective, and visual scale. Our evaluation covers 82 different layers from twelve different CNN architectures belonging to four families: AlexNets, VGG Nets, ResNets, and DenseNets. To our knowledge, this is the most comprehensive analysis of its kind in the literature. We find that the intermediate layers of DenseNets serve as the best feature extractors overall, providing the best trade-off between robustness and feature size. Moreover, we find that for each network, the later-intermediate layers provide the best performance, regardless of the total number of layers in the network.
@inproceedings{9108679,
  author    = {Holliday, Andrew and Dudek, Gregory},
  booktitle = {2020 17th Conference on Computer and Robot Vision},
  title     = {Pre-trained CNNs as Visual Feature Extractors: A Broad Evaluation},
  year      = {2020},
  pages     = {78-84},
  keywords  = {Visualization;Computer architecture;Network architecture;Feature extraction;Robustness;Convolutional neural networks;Task analysis},
  doi       = {10.1109/CRV50864.2020.00019}
}
2018
- [CRV] Gaze Selection for Enhanced Visual Odometry During Navigation. Travis Manderson, Andrew Holliday, and Gregory Dudek. In 2018 15th Conference on Computer and Robot Vision, 2018
We present an approach to enhancing visual odometry and Simultaneous Localization and Mapping (SLAM) in the context of robot navigation by actively modulating the gaze direction to enhance the quality of the odometric estimates that are returned. We focus on two quality factors: i) stability of the visual features, and ii) consistency of the visual features with respect to robot motion and the associated correspondence between frames. We assume that local texture measures are associated with underlying scene content and thus with the quality of the visual features for the associated region of the scene. Based on this assumption, we train a machine-learning system to score different regions of an image based on their texture and then guide the robot’s gaze toward high scoring image regions. Our work is targeted towards motion estimation and SLAM for small, lightweight, and autonomous air vehicles where computational resources are constrained in weight, size, and power. However, we believe that our work is also applicable to other types of robotic systems. Our experimental validation consists of simulations, constrained tests, and outdoor flight experiments on an unmanned aerial vehicle. We find that modulating gaze direction can improve localization accuracy by up to 62 percent.
@inproceedings{8575743,
  author    = {Manderson, Travis and Holliday, Andrew and Dudek, Gregory},
  booktitle = {2018 15th Conference on Computer and Robot Vision},
  title     = {Gaze Selection for Enhanced Visual Odometry During Navigation},
  year      = {2018},
  pages     = {110-117},
  keywords  = {Cameras;Simultaneous localization and mapping;Visual odometry;Visualization;Real-time systems;Motion estimation;active sensing;robotics;vision;SLAM},
  doi       = {10.1109/CRV.2018.00025}
}

- [IROS] Scale-Robust Localization Using General Object Landmarks. Andrew Holliday and Gregory Dudek. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018
Visual localization under large changes in scale is an important capability in many robotic mapping applications, such as localizing at low altitudes in maps built at high altitudes, or performing loop closure over long distances. Existing approaches, however, are robust only up to about a 3× difference in scale between map and query images. We propose a novel combination of deep-learning-based object features and state-of-the-art SIFT point-features that yields improved robustness to scale change. This technique is training-free and class-agnostic, and in principle can be deployed in any environment out-of-the-box. We evaluate the proposed technique on the KITTI Odometry benchmark and on a novel dataset of outdoor images exhibiting changes in visual scale of 7× and greater, which we have released to the public. Our technique consistently outperforms localization using either SIFT features or the proposed object features alone, achieving both greater accuracy and much lower failure rates under large changes in scale.
@inproceedings{8594011,
  author    = {Holliday, Andrew and Dudek, Gregory},
  booktitle = {2018 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  title     = {Scale-Robust Localization Using General Object Landmarks},
  year      = {2018},
  pages     = {1688-1694},
  keywords  = {Visualization;Measurement;Simultaneous localization and mapping;Robustness;Databases;Search problems},
  doi       = {10.1109/IROS.2018.8594011}
}
2017
- [CVIU] Speedup of deep learning ensembles for semantic segmentation using a model compression technique. Andrew Holliday, Mohammadamin Barekatain, Johannes Laurmaa, and 2 more authors. Computer Vision and Image Understanding (Deep Learning for Computer Vision), 2017
Deep Learning (DL) has been proven as a powerful recognition method as evidenced by its success in recent computer vision competitions. The most accurate results have been obtained by ensembles of DL models that pool their results. However, such ensembles are computationally costly, making them inapplicable to real-time applications. In this paper, we apply model compression techniques to the problem of semantic segmentation, which is one of the most challenging problems in computer vision. Our results suggest that compressed models can approach the accuracy of full ensembles on this task, combining the diverse strengths of networks of very different architectures, while maintaining real-time performance.
@article{HOLLIDAY201716,
  title     = {Speedup of deep learning ensembles for semantic segmentation using a model compression technique},
  author    = {Holliday, Andrew and Barekatain, Mohammadamin and Laurmaa, Johannes and Kandaswamy, Chetak and Prendinger, Helmut},
  journal   = {Computer Vision and Image Understanding},
  volume    = {164},
  pages     = {16-26},
  year      = {2017},
  note      = {Deep Learning for Computer Vision},
  issn      = {1077-3142},
  doi       = {10.1016/j.cviu.2017.05.004},
  url       = {https://www.sciencedirect.com/science/article/pii/S1077314217300826},
  keywords  = {Semantic segmentation, Model compression, Transfer learning, Real-time application},
}