Towards reliable and flexible hyperparameter and blackbox optimization – Google AI Blog



Google Vizier is the de-facto system for blackbox optimization over objective functions and hyperparameters across Google, having serviced some of Google's largest research efforts and optimized a wide range of products (e.g., Search, Ads, YouTube). For research, it has not only reduced language model latency for users, designed computer architectures, accelerated hardware, assisted protein discovery, and enhanced robotics, but also provided a reliable backend interface for users to search for neural architectures and evolve reinforcement learning algorithms. To operate at the scale of optimizing thousands of users' critical systems and tuning millions of machine learning models, Google Vizier solved key design challenges in supporting diverse use cases and workflows, while remaining strongly fault-tolerant.

Today we are excited to announce Open Source (OSS) Vizier (with an accompanying systems whitepaper published at AutoML Conference 2022), a standalone Python package based on Google Vizier. OSS Vizier is designed for two main purposes: (1) managing and optimizing experiments at scale in a reliable and distributed manner for users, and (2) developing and benchmarking algorithms for automated machine learning (AutoML) researchers.

System design

OSS Vizier works by having a server provide services, namely the optimization of blackbox objectives, or functions, for multiple clients. In the main workflow, a client sends a remote procedure call (RPC) and asks for a suggestion (i.e., a proposed input for the client's blackbox function), from which the service begins to spawn a worker to launch an algorithm (i.e., a Pythia policy) to compute the following suggestions. The suggestions are then evaluated by clients to form their corresponding objective values and measurements, which are sent back to the service. This pipeline is repeated multiple times to form a complete tuning trajectory.
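The suggest → evaluate → report cycle above can be sketched in plain Python. This is a toy illustration of the workflow only: `RandomSearchPolicy` and `tuning_loop` are hypothetical stand-ins, not the actual OSS Vizier client API or its Pythia policy interface.

```python
import random

class RandomSearchPolicy:
    """Toy stand-in for a Pythia policy: proposes random inputs in [low, high]."""
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high

    def suggest(self) -> dict:
        return {'x': random.uniform(self.low, self.high)}

def tuning_loop(objective, policy, num_trials: int):
    """Repeats the suggest -> evaluate -> report cycle to form a tuning trajectory."""
    trajectory = []
    for _ in range(num_trials):
        suggestion = policy.suggest()           # client asks the service for a suggestion
        measurement = objective(**suggestion)   # client evaluates its blackbox function
        trajectory.append((suggestion, measurement))  # measurement reported back
    return trajectory

# Example: maximize a simple blackbox objective f(x) = -(x - 0.5)^2 over [0, 1].
trajectory = tuning_loop(lambda x: -(x - 0.5) ** 2, RandomSearchPolicy(0.0, 1.0), 20)
best_params, best_value = max(trajectory, key=lambda t: t[1])
```

In the real system the policy runs server-side and the client communicates over RPC, but the control flow each client sees is the same loop.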

The use of the ubiquitous gRPC library, which is compatible with most programming languages, such as C++ and Rust, allows maximum flexibility and customization, where the user can also write their own custom clients and even algorithms outside of the default Python interface. Since the entire process is saved to an SQL datastore, a smooth recovery is ensured after a crash, and usage patterns can be stored as valuable datasets for research into meta-learning and multitask transfer-learning methods such as the OptFormer and HyperBO.

In the distributed pipeline, multiple clients each send a "Suggest" request to the Service API, which produces Suggestions for the clients using Pythia. The clients evaluate these suggestions and return measurements. All transactions are stored to allow fault-tolerance.


Because OSS Vizier is designed as a service, in which clients can send requests to the server at any point in time, it supports a broad range of scenarios: the budget of evaluations, or trials, can range from tens to millions, and the evaluation latency can range from seconds to weeks. Evaluations can be done asynchronously (e.g., tuning an ML model) or in synchronous batches (e.g., wet lab settings involving multiple simultaneous experiments). Furthermore, evaluations may fail due to transient errors and be retried, or may fail due to persistent errors (e.g., the evaluation is impossible) and should not be retried.
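The distinction between transient and persistent failures can be sketched as follows. The exception names, statuses, and `evaluate_with_retries` helper below are hypothetical illustrations of the policy described above, not OSS Vizier's actual error-handling API.

```python
class TransientError(Exception):
    """Evaluation failed for a temporary reason (e.g., a flaky worker); safe to retry."""

class PersistentError(Exception):
    """Evaluation is impossible for this input; retrying would never succeed."""

def evaluate_with_retries(objective, params, max_retries: int = 3):
    """Retries transient failures; marks persistently failing trials infeasible."""
    for _ in range(max_retries):
        try:
            return ('COMPLETED', objective(params))
        except TransientError:
            continue  # transient: retry the same trial
        except PersistentError:
            return ('INFEASIBLE', None)  # persistent: never retried
    return ('FAILED', None)  # transient errors exhausted the retry budget

# A flaky objective that fails once with a transient error, then succeeds.
calls = {'n': 0}
def flaky(params):
    calls['n'] += 1
    if calls['n'] == 1:
        raise TransientError()
    return 1.0

status, value = evaluate_with_retries(flaky, {'x': 0.1})   # ('COMPLETED', 1.0)
```

Marking infeasible trials explicitly (rather than retrying them) lets the optimization algorithm learn to avoid impossible regions of the search space.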

This broadly supports a variety of applications, including hyperparameter tuning of deep learning models and optimizing non-computational objectives, which can be, e.g., physical, chemical, biological, mechanical, or even human-evaluated, such as cookie recipes.

The OSS Vizier API allows (1) developers to integrate other packages, with PyGlove and Vertex Vizier already included, and (2) users to optimize their experiments, such as machine learning pipelines and cookie recipes.

Integrations, algorithms, and benchmarks

As Google Vizier is heavily integrated with many of Google's internal frameworks and products, OSS Vizier will naturally be heavily integrated with many of Google's open source and external frameworks. Most prominently, OSS Vizier will serve as a distributed backend for PyGlove to allow large-scale evolutionary searches over combinatorial primitives such as neural architectures and reinforcement learning algorithms. Furthermore, OSS Vizier shares the same client-based API with Vertex Vizier, allowing users to quickly switch between open-source and production-quality services.

For AutoML researchers, OSS Vizier is also equipped with a useful collection of algorithms and benchmarks (i.e., objective functions) unified under common APIs for assessing the strengths and weaknesses of proposed methods. Most notably, via TensorFlow Probability, researchers can now use the JAX-based Gaussian Process Bandit algorithm, based on the default algorithm in Google Vizier that tunes internal users' objectives.
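Unifying benchmarks under a common API means any algorithm can be scored against any objective function through one interface. The sketch below is a minimal, hypothetical version of that idea; the `Experimenter` name echoes OSS Vizier's benchmark abstraction, but this simplified signature is illustrative, not the library's actual API.

```python
import abc
import math

class Experimenter(abc.ABC):
    """Common benchmark interface: a blackbox objective mapping parameters to a value."""
    @abc.abstractmethod
    def evaluate(self, params: dict) -> float:
        ...

class Sphere(Experimenter):
    """Sphere benchmark: global minimum 0 at the origin."""
    def evaluate(self, params: dict) -> float:
        return sum(v ** 2 for v in params.values())

class Branin(Experimenter):
    """Branin function, a standard 2-D blackbox optimization benchmark."""
    def evaluate(self, params: dict) -> float:
        x1, x2 = params['x1'], params['x2']
        return ((x2 - 5.1 / (4 * math.pi ** 2) * x1 ** 2
                 + 5 / math.pi * x1 - 6) ** 2
                + 10 * (1 - 1 / (8 * math.pi)) * math.cos(x1) + 10)

def best_of(algorithm, benchmark: Experimenter, num_trials: int) -> float:
    """Scores any suggestion algorithm against any benchmark via the shared API."""
    return min(benchmark.evaluate(algorithm()) for _ in range(num_trials))
```

Because every benchmark exposes the same `evaluate` call, comparing a new algorithm's strengths and weaknesses across many objectives reduces to a loop over `Experimenter` instances.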

Resources and future direction

We provide links to the codebase, documentation, and systems whitepaper. We plan to allow user contributions, especially in the form of algorithms and benchmarks, and to further integrate with the open-source AutoML ecosystem. Going forward, we hope to see OSS Vizier become a core tool for expanding research and development in blackbox optimization and hyperparameter tuning.


OSS Vizier was developed by members of the Google Vizier team in collaboration with the TensorFlow Probability team: Setareh Ariafar, Lior Belenki, Emily Fertig, Daniel Golovin, Tzu-Kuo Huang, Greg Kochanski, Chansoo Lee, Sagi Perel, Adrian Reyes, Xingyou (Richard) Song, and Richard Zhang.

In addition, we thank Srinivas Vasudevan, Jacob Burnim, Brian Patton, Ben Lee, Christopher Suter, and Rif A. Saurous for further TensorFlow Probability integrations, Daiyi Peng and Yifeng Lu for PyGlove integrations, Hao Li for Vertex/Cloud integrations, Yingjie Miao for AutoRL integrations, Tom Hennigan, Varun Godbole, Pavel Sountsov, Alexey Volkov, Mihir Paradkar, Richard Belleville, Bu Su Kim, Vytenis Sakenas, Yujin Tang, Yingtao Tian, and Yutian Chen for open source and infrastructure help, and George Dahl, Aleksandra Faust, Claire Cui, and Zoubin Ghahramani for discussions.

Finally, we thank Tom Small for designing the animation for this post.