I wrote a Jupyter notebook that demonstrates how easily you can use pandas together with pygal. Because it won’t run here on my blogging system, you have to read it on GitHub: Jupyter notebook HTML export (fully interactive version)
Here is a short demo that I show regularly in my talks about Software Analytics. It shows how you can identify components in your code that are so old that probably nobody knows anything about them anymore.
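The core of the analysis can be sketched in a few lines of pandas, assuming a Git log has already been loaded into a DataFrame (the file names and dates below are made up):

```python
import pandas as pd

# Hypothetical commit data: in the demo, this comes from a Git log export.
log = pd.DataFrame({
    'file': ['core/engine.py', 'core/engine.py', 'legacy/billing.py', 'ui/app.py'],
    'timestamp': pd.to_datetime(
        ['2021-03-01', '2021-06-15', '2015-01-20', '2021-07-01'])
})

# The last time each file was touched; the oldest entries are candidates
# for code that nobody knows anything about anymore.
last_touched = log.groupby('file')['timestamp'].max().sort_values()
```

Sorting ascending puts the longest-untouched files first, which is exactly the list you want to discuss with your team.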
In my talks, I use a short example that illustrates how the mechanics of my notebook-driven approach for analyzing software systems work. Here on my blog, you can now find another example, too…
In my previous blog post, we saw how we can identify files that change together within the same commit.
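To recap, the previous analysis boils down to a self-join on the commit hash. The sketch below uses a tiny made-up dataset of (commit, file) pairs:

```python
import pandas as pd

# Hypothetical data: one row per (commit, file) pair from a Git log.
changes = pd.DataFrame({
    'sha': ['c1', 'c1', 'c2', 'c2', 'c3'],
    'file': ['a.py', 'b.py', 'a.py', 'b.py', 'c.py']
})

# Self-join on the commit hash to get all pairs of files changed together.
pairs = changes.merge(changes, on='sha')
pairs = pairs[pairs['file_x'] < pairs['file_y']]  # drop self and mirrored pairs

# Count how often each pair was changed in the same commit.
cochanges = pairs.groupby(['file_x', 'file_y']).size().sort_values(ascending=False)
```

The `<` comparison on file names keeps each pair only once (and removes a file paired with itself), so the counts aren’t doubled.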
In this blog post, we take the analysis to an advanced level…
In his book Software Design X-Ray, Adam Tornhill presents a nice metric for finding out whether parts of your code are coupled through their joint changes: Temporal Coupling.
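One common formulation of the metric relates the number of shared commits of two files to the average number of commits of each file. This is a rough sketch under that assumption, on made-up data, not Adam’s exact implementation:

```python
import pandas as pd

# Hypothetical (commit, file) pairs from a Git log.
changes = pd.DataFrame({
    'sha': ['c1', 'c1', 'c2', 'c2', 'c3', 'c3', 'c4'],
    'file': ['a.py', 'b.py', 'a.py', 'b.py', 'a.py', 'c.py', 'b.py']
})

# Number of commits per file.
revs = changes.groupby('file')['sha'].nunique()

# Shared commits per file pair (self-join on the commit hash).
pairs = changes.merge(changes, on='sha')
pairs = pairs[pairs['file_x'] < pairs['file_y']]
shared = pairs.groupby(['file_x', 'file_y'])['sha'].nunique()

# Degree of temporal coupling: shared commits divided by the
# average commit count of the two files involved.
avg_revs = (revs[shared.index.get_level_values(0)].values +
            revs[shared.index.get_level_values(1)].values) / 2
coupling = (shared / avg_revs).sort_values(ascending=False)
```

A degree close to 1 means two files are almost always changed together, which hints at a hidden dependency.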
In this and the next blog posts, I’m playing around with Adam’s ideas (and more) to find hidden dependencies of code parts based on version control data…
When you come across my blog, you may be thinking, “What is this guy doing?” There are some weird analyses that somehow have to do with software development. But why all this?
Here are some queries that I use regularly at meetups and conferences to show some features of jQAssistant and Neo4j.
Here is a short video that demonstrates how you can get some insights from the history of a Git repository using Jupyter Notebook, Python, pandas and matplotlib. We take a look at exporting the necessary data and reading in the dataset.
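The reading-in step can be sketched like this. The CSV content below is a made-up stand-in for an export such as `git log --pretty=format:"%h,%ad,%aN" --date=short`; the video walks through the real thing:

```python
import io
import pandas as pd

# Made-up stand-in for a CSV export of a Git log.
raw = io.StringIO(
    "sha,date,author\n"
    "f3a21b7,2021-05-04,Alice\n"
    "9c01dd2,2021-05-06,Bob\n"
    "1ab34f9,2021-06-11,Alice\n"
)

# Read the export into a DataFrame, parsing the dates right away.
log = pd.read_csv(raw, parse_dates=['date'])

# A first simple insight: commits per author.
commits_per_author = log.groupby('author').size()
```

From here, `commits_per_author.plot.bar()` would give you the matplotlib chart shown in the video.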
In this blog post, I want to show you a nice complexity metric that works for most major programming languages that we use for our software systems – the indentation-based complexity metric.
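The idea fits in a few lines: the amount of leading whitespace per line serves as a cheap proxy for complexity, since deeply nested code is deeply indented. A minimal sketch, assuming space-indented source (tabs would need normalizing first):

```python
# Example source code to measure (indentation with spaces).
source = """\
def f(x):
    if x > 0:
        for i in range(x):
            print(i)
    return x
"""

# Sum the leading whitespace of all non-blank lines as a
# language-agnostic complexity proxy.
complexity = sum(
    len(line) - len(line.lstrip())
    for line in source.splitlines()
    if line.strip()  # ignore blank lines
)
```

Because it only looks at whitespace, the same measurement works unchanged for Java, C#, JavaScript and most other languages.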
This notebook is a simple mini-tutorial that introduces you to basic functions of Jupyter, Python, pandas and matplotlib with the aim of analyzing software data. The example is chosen in such a way that we come across the typical steps of a data analysis.