Contribute to a growing Python package for time series machine learning

Image by geralt at pixabay

Sktime is a popular new Python package for time series machine learning. The contributors continue to fix bugs and add new features, and they invite you to contribute too!

Why contribute to sktime?

  1. Improve your skills in machine learning and coding.
  2. Learn the nuts-and-bolts of machine learning algorithms.
  3. Build your resume.
  4. Give back to the open-source community. Many common machine learning tools are also open-source.

The community is particularly motivated to support new and/or anxious contributors. People who are looking to learn and develop their skills are welcomed and supported.

sktime Dev Days

Community members of all experience levels are invited to the…


Thoughts and Theory

The FASTEST state-of-the-art algorithm for time series classification with Python

By Joe Schneid at Wikimedia Commons

Most state-of-the-art (SOTA) time series classification methods are limited by high computational complexity. This makes them slow to train even on smaller datasets and effectively unusable on large datasets.

Recently, ROCKET (RandOm Convolutional KErnel Transform) has achieved SOTA accuracy in a fraction of the time required by other SOTA time series classifiers. ROCKET transforms time series into features using random convolutional kernels and passes the features to a linear classifier.
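The core idea is simple enough to sketch in a few lines of NumPy. The snippet below is a toy illustration only, not the real ROCKET implementation (which also randomizes kernel length, dilation, padding, and bias): convolve the series with random kernels, then pool two features per kernel for a linear classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def rocket_like_features(series, n_kernels=100, kernel_len=9):
    """Toy ROCKET-style transform: convolve a 1-D series with random
    kernels and pool two features per kernel (global max and proportion
    of positive values), yielding 2 * n_kernels features."""
    features = []
    for _ in range(n_kernels):
        kernel = rng.normal(size=kernel_len)       # random weights
        conv = np.convolve(series, kernel, mode="valid")
        features.append(conv.max())                # global max pooling
        features.append((conv > 0).mean())         # "PPV": proportion of positive values
    return np.array(features)

# One synthetic series becomes a fixed-length feature vector,
# ready for a linear classifier such as ridge regression.
x = np.sin(np.linspace(0, 10, 200))
feats = rocket_like_features(x)
print(feats.shape)  # (200,)
```

Because the kernels are random and fixed, only the cheap linear classifier on top needs to be trained, which is where the speed comes from.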

MiniRocket is even faster!

MiniRocket (MINImally RandOm Convolutional KErnel Transform) is a (nearly) deterministic reformulation of ROCKET that is up to 75 times faster on larger datasets, with roughly equivalent accuracy.


Advice from a Top Writer in Artificial Intelligence

So you want to write a widely read article about Data Science / Machine Learning / Artificial Intelligence?

In May 2021, I was recognized as a top writer in AI and was among the top 1000 writers in the Medium Partner Program. My older articles still continue to receive views and often appear in Google searches. (Scroll to the bottom for a screenshot of my stats).

Read along to learn some of the keys to my success.

I earned this badge in May 2021 once I started tagging my articles with “Artificial Intelligence”.

Selecting Topics

I first learned about Medium as a data scientist searching for specific topics in Data Science. …


Machine Learning

Considerations for Anomaly Detection Machine Learning Tasks

Image by Hans at Pixabay

Outlier detection is a machine learning task that aims to identify rare items, events, or observations that deviate from the “norm” or general distribution of the given data.

An anomaly is something that arouses suspicion that it was generated by a different data-generating mechanism

The Outlier Detection Machine Learning Task

In the outlier detection task, the goal is to train an unsupervised model to find anomalies subject to two constraints:

  1. Minimize false negatives (aka catch as many anomalies as possible).
  2. Minimize false positives (aka when an anomaly is flagged, don’t be wrong).
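In evaluation terms, the two constraints correspond to recall and precision on the anomaly class. A quick illustration with made-up labels (the numbers here are purely hypothetical):

```python
# Hypothetical ground-truth anomaly labels (1 = anomaly) and model flags
y_true = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
y_flag = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]

tp = sum(t == 1 and f == 1 for t, f in zip(y_true, y_flag))  # caught anomalies
fn = sum(t == 1 and f == 0 for t, f in zip(y_true, y_flag))  # missed anomalies
fp = sum(t == 0 and f == 1 for t, f in zip(y_true, y_flag))  # false alarms

recall = tp / (tp + fn)     # constraint 1: catch as many anomalies as possible
precision = tp / (tp + fp)  # constraint 2: when you flag, don't be wrong
print(recall, precision)    # 0.666... 0.666...
```

Pushing either number toward 1.0 tends to degrade the other, which is what makes the task a balancing act.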

In many applications, there is a third constraint: the “ground truth” of what are true…




The scikit-learn for outlier detection machine learning tasks

Photo by Anita Ritenour at flickr

PyOD is a Python library with a comprehensive set of scalable, state-of-the-art (SOTA) algorithms for detecting outlying data points in multivariate data. This task is commonly referred to as Outlier Detection or Anomaly Detection.

The outlier detection task aims to identify rare items, events, or observations that deviate from the “norm” or general distribution of the given data.

My favorite definition: An anomaly is something that arouses suspicion that it was generated by a different data-generating mechanism

Common applications of outlier detection include fraud detection, data error detection, intrusion detection in network security, and fault detection in mechanics.

Why Specific Algorithms for Anomaly Detection?

Practically speaking…


A Python-based, fast, parameter-free, and highly interpretable unsupervised anomaly detection method

Image by Marco Verch Professional Photographer

Outliers, or anomalies, are data points that deviate from the norm of a dataset. They arouse suspicion that they were generated by a different mechanism.

Anomaly detection is (usually) an unsupervised learning task whose objective is to identify suspicious observations in data. The task is constrained by two competing costs: incorrectly flagging normal points as anomalous, and failing to flag actual anomalies.

Applications of anomaly detection include network intrusion detection, data quality monitoring, and price arbitrage in financial markets.

Copula-Based Outlier Detection — COPOD — is a new algorithm for anomaly detection. …


Make your machine learning system resilient to changes in the world

Photo by Keng Ling on Unsplash

The world is inherently dynamic and nonstationary — constantly changing.

It is common for the performance of machine learning models to decline over time. This occurs as data distributions and target labels (“ground truth”) evolve. This is especially true for models related to people.

Thus, an essential component of machine learning systems is monitoring and adapting to such changes.

In this article, I will introduce this idea of concept drift or regime change and then discuss three ways to handle it and what you should consider.
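A minimal monitoring sketch, under the assumption that drift shows up as a shift in a feature's mean: compare a recent window of data against a reference window and flag drift when the means diverge by more than a threshold. (Real systems typically use statistical tests such as Kolmogorov-Smirnov, or dedicated detectors like ADWIN; this is only the simplest version of the idea.)

```python
import numpy as np

def mean_shift_drift(reference, recent, n_sigmas=3.0):
    """Flag drift when the recent window's mean deviates from the
    reference mean by more than n_sigmas standard errors."""
    ref_mean = reference.mean()
    stderr = reference.std(ddof=1) / np.sqrt(len(recent))
    return abs(recent.mean() - ref_mean) > n_sigmas * stderr

rng = np.random.default_rng(7)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)  # training-time data
stable = rng.normal(loc=0.0, scale=1.0, size=200)      # same distribution
drifted = rng.normal(loc=1.0, scale=1.0, size=200)     # mean shifted by 1

print(mean_shift_drift(reference, stable))
print(mean_shift_drift(reference, drifted))  # True: mean shifted well past the threshold
```

Hooking a check like this to a retraining trigger is the simplest form of "adapting to change"; the article discusses richer strategies.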

New tools for model monitoring are emerging, but it is still important to understand…


Hands-on Tutorials

How to cluster time series in Python — faster and more flexibly than k-means!

Source: Wikimedia Commons

Clustering is an unsupervised learning task where an algorithm groups similar data points without any “ground truth” labels. Clustering time series into similar groups is challenging because each data point is an ordered sequence.

In a previous article, I explained how the k-means clustering algorithm can be adapted to time series by using Dynamic Time Warping, which measures the similarity between two sequences, in place of standard measures like Euclidean distance.

Unfortunately, the k-means clustering algorithm for time series can be very slow!

Hierarchical clustering is faster than k-means because it operates on a matrix of pairwise distances…
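For a small dataset, the approach can be sketched with NumPy and SciPy: compute pairwise DTW distances with a plain dynamic-programming implementation, then feed the condensed distance matrix to SciPy's hierarchical clustering. The toy series below are assumptions chosen to make the grouping obvious:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy dataset: two phase-shifted sines and two near-identical ramps
t = np.linspace(0, 2 * np.pi, 50)
series = [np.sin(t), np.sin(t + 0.3), np.linspace(0, 1, 50), np.linspace(0, 1.1, 50)]

# Pairwise DTW distances -> condensed form -> hierarchical clustering
n = len(series)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(series[i], series[j])

Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # the two sines and the two ramps land in separate clusters
```

The key speedup over k-means is that the distance matrix is computed once; no centroids need to be recomputed (and re-averaged under DTW) at every iteration.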


Data Science, Machine Learning

State-of-the-art algorithm for time series classification with Python

Image by OpenClipart-Vectors at pixabay

“The task of time series classification can be thought of as involving learning or detecting signals or patterns within time series associated with relevant classes.” — Dempster et al., 2020, authors of the ROCKET paper

Most time series classification methods with state-of-the-art (SOTA) accuracy have high computational complexity and scale poorly. This means they are slow to train even on smaller datasets and effectively unusable on large datasets.

ROCKET (RandOm Convolutional KErnel Transform) can achieve the same level of accuracy in a fraction of the time required by competing SOTA algorithms, including convolutional neural networks. …

Alexandra Amidon

Data scientist working in the financial services industry
