There is no magic formula in data science that can tell us which type of machine learning algorithm will perform best for a specific use case. Different models are better suited to different types of data and different problems. At best, you can follow some rough guide on which model to try on your data, but these guides are often more confusing than helpful. Best practices tell us to start with a simple model (e.g. linear regression) and build up to more complicated ones (e.g. linear regression -> random forest -> multi-layer perceptron) if you are not satisfied with the results. Unfortunately, different models require different data cleaning steps, a different type and number of features, tuning a new set of hyperparameters, etc. Refactoring the code for this purpose can be quite tedious and time-consuming. Because of this, many data scientists end up just using the model they know best and fine-tuning that particular model without ever trying different ones. This can result in poor performance (because the model is simply not the right one for the task) or in poor time management (because you could have achieved similar performance with a simpler/faster model).

ATOM is made to help you solve these issues. With just a few lines of code, you can perform basic data cleaning steps, select relevant features and compare the performance of multiple models on a given dataset. ATOM should be able to provide quick insights into which algorithms perform best for the task at hand and give an indication of the feasibility of the ML solution.

It is important to realize that ATOM is not here to replace all the work a data scientist has to do before getting a model into production. ATOM doesn't spit out production-ready models just by tuning some parameters in its API. After it has helped you determine the right model, you will most probably need to fine-tune it using use-case-specific features and data cleaning steps in order to achieve maximum performance.

So, this sounds a bit like AutoML. How is ATOM different from auto-sklearn or TPOT? Well, ATOM does AutoML in the sense that it helps you find the best model for a specific task, but contrary to the aforementioned packages, it does not actively search for the best model. It simply runs all of them and lets you pick the one that you think suits the task best. AutoML packages are often black boxes: you provide data, and they magically return a working model. Although they can work great, they often produce complicated pipelines with low explainability, which is hard to sell to the business. This is where ATOM excels. Every step of the pipeline is accounted for, and using the provided plotting methods, it's easy to demonstrate why one model is a better or worse choice than another.
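To make the "few lines of code" claim concrete, here is a minimal sketch of what such a pipeline could look like. The dataset is just an example, and the exact parameter names, strategies and model acronyms may differ slightly between ATOM versions, so treat this as an illustration rather than a reference.

```python
from atom import ATOMClassifier
from sklearn.datasets import load_breast_cancer

# Example dataset; any tabular X, y works. On a messier dataset,
# the cleaning steps below would actually have work to do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Initialize atom with a train/test split
atom = ATOMClassifier(X, y, test_size=0.2, verbose=2)

# Basic data cleaning: impute missing values and encode categorical columns
atom.impute(strat_num="median", strat_cat="most_frequent")
atom.encode(strategy="Target")

# Select relevant features
atom.feature_selection(strategy="PCA", n_features=10)

# Train and evaluate several models in one call:
# logistic regression, random forest and a multi-layer perceptron
atom.run(models=["LR", "RF", "MLP"], metric="f1")

# Compare the results and plot them
print(atom.results)
atom.plot_results()
```

The point is not that these three models are the right ones, but that trying a different set is a one-line change instead of a refactor.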