NOTE: This is Part II of a blog series on how to build AppImage files for more realistic Python programs. This post assumes you have either read Part I or have a working understanding of the AppImage file format. If you feel lost reading this post and haven’t read Part I, please consider reading it first as a primer.

Tackling the Complex Cow


A more complex cow

In my last post, I covered the simplest possible case: packaging a trivial Python app into an AppImage. We are going to build on that to handle a more complex application.

Manylinux and Docker

Luckily our fine Python community has implemented a PEP that provides a really useful tool for handling this type of issue. PEP 600 defines the manylinux platform tags, and the PyPA provides matching manylinux Docker images for building Python package wheels. Building wheels is not our exact use case here, but since a wheel is also a software package, albeit Python-only, you can see how manylinux can come to the rescue in constructing a more sophisticated AppImage file.

Let’s start by pulling a manylinux Docker image. For my project I am going to choose the manylinux_2_28 image.

Manylinux has several Docker images available with differing hardware architecture and glibc support. Since my current projects only target aarch64 and x86_64, I have the pick of the litter. The current naming scheme for the manylinux images encodes the glibc version into the name: manylinux_2_28 indicates that the binaries in the image were compiled against glibc 2.28. You want to choose a glibc version less than or equal to that of your target install platforms. For my purposes glibc 2.28 is a pretty safe choice.
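If you are unsure what glibc your target systems ship, you can query it directly on the target, e.g. with ldd --version, or from Python's standard library as sketched below (the output varies by host):

```python
import platform

# libc_ver() reports the C library the running interpreter was linked
# against, e.g. ('glibc', '2.35'). A manylinux_2_NN image is safe for this
# host when NN is at or below the reported minor version.
lib, version = platform.libc_ver()
print(lib, version)
```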

We need to know what Docker repository to pull. If you are not familiar with the Docker ecosystem this might seem a little cryptic to you. The image repositories for manylinux_2_28 are listed at this link. We are going to pull the quay.io/pypa/manylinux_2_28_x86_64 repository.

Below is the output from pulling our manylinux container image:

dev@host:~$ docker pull quay.io/pypa/manylinux_2_28_x86_64
Using default tag: latest
latest: Pulling from pypa/manylinux_2_28_x86_64
09720f817e0c: Pull complete
2da756f29325: Pull complete
7e243e2e52d9: Pull complete
e9a89bd7d45e: Pull complete
97b85e4d96f2: Pull complete
eee16c856530: Pull complete
26590b30e40e: Pull complete
3d857074a168: Pull complete
b68000c24722: Pull complete
5bd8792c26be: Pull complete
3b62a3e7293c: Pull complete
c295786e836b: Pull complete
e42d82ebc133: Pull complete
fe8fb7fac4a7: Pull complete
eefbd81d4b4d: Pull complete
4f4fb700ef54: Pull complete
8bf6543131e3: Pull complete
67b5443d402d: Pull complete
f1fa436ddd46: Pull complete
89adc2f105e2: Pull complete
cb0fa6a1e599: Pull complete
Digest: sha256:7d89e036b9493f94a0bb252c1db4ab3f71f5b83874cb274f85b3c40be712f513
Status: Downloaded newer image for quay.io/pypa/manylinux_2_28_x86_64:latest
quay.io/pypa/manylinux_2_28_x86_64:latest
$

Now let’s take a look at this Docker image.

$ docker images
REPOSITORY                           TAG       IMAGE ID       CREATED        SIZE
quay.io/pypa/manylinux_2_28_x86_64   latest    01bf8a9e3a53   6 days ago     1.55GB
$

There it is! Just waiting for us to run it.

Let’s start an interactive bash container from this image:

$ docker run -it 01bf8a9e3a53 /bin/bash
[root@b2203f6c0d63 /]#

We are in. Now let’s see which CPython installations are available to execute. Type “python3” and, since we are using bash, hit the TAB key to auto-complete and get suggestions.

[root@b2203f6c0d63 /]# python3
python3      python3.11   python3.13   python3.6    python3.8    
python3.10   python3.12   python3.13t  python3.7    python3.9    
[root@b2203f6c0d63 /]# python3

We can see here that we have CPython versions 3.6 through 3.13 to choose from. Let’s take the python3.13 binary for a spin.

[root@b2203f6c0d63 /]# python3.13
Python 3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> print('Moo World!')
Moo World!
>>>

It works! We now have an environment to start creating a CPython runtime system for a more complex cow!

The More Complex Cow

In order to isolate our work, we will create a Python virtual environment inside this Docker container.

[root@b2203f6c0d63 /]# cd
[root@b2203f6c0d63 ~]# pwd
/root
[root@b2203f6c0d63 ~]# python3.13 -m venv comcow
[root@b2203f6c0d63 ~]# source comcow/bin/activate
(comcow) [root@b2203f6c0d63 ~]# python3
Python 3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
(comcow) [root@b2203f6c0d63 ~]#

Everyone loves the KISS philosophy. Let’s create a simple CPython program that imports the modules used in Muscle Buddy.

Source for comcow.py:

#!/usr/bin/env python3

import bisect
import collections
import copy
import datetime
import functools
import glob
import json
import kivy
import kivymd
import math
import os
import pathlib
import pickle
import platform
import psutil
import random
import re
import shutil
import signal
import sqlite3
import subprocess
import sys
import textwrap
import time
import tkinter
import uuid

print('Moo World!')

What this does is cause CPython to instantiate objects for each of these Python modules. If a module is missing, the program exits with a ModuleNotFoundError exception:

(comcow) [root@b2203f6c0d63 ~]# ./comcow.py
Traceback (most recent call last):
  File "/root/./comcow.py", line 10, in <module>
    import kivy
ModuleNotFoundError: No module named 'kivy'
(comcow) [root@b2203f6c0d63 ~]#
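As an aside, if you only want to know which modules are missing, the standard library can check availability without importing anything. This sketch is a stand-alone variant, not part of comcow.py, and the module list here is abbreviated:

```python
import importlib.util

# find_spec() locates a module without executing it, so every missing module
# can be reported in one pass instead of failing on the first import. Note it
# only proves the module can be found, not that its shared libraries load;
# actually importing, as comcow.py does, is the stricter test.
required = ["bisect", "json", "sqlite3", "tkinter", "kivy", "kivymd", "psutil"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
if missing:
    print("Missing modules:", ", ".join(missing))
else:
    print("Moo World!")
```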

To fix this we will install the module using pip.

(comcow) [root@b2203f6c0d63 ~]# pip install kivy
Collecting kivy
  Downloading Kivy-2.3.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (14 kB)
Collecting Kivy-Garden>=0.1.4 (from kivy)
  Downloading Kivy_Garden-0.1.5-py3-none-any.whl.metadata (159 bytes)
Collecting docutils (from kivy)
  Downloading docutils-0.21.2-py3-none-any.whl.metadata (2.8 kB)
Collecting pygments (from kivy)
  Downloading pygments-2.19.1-py3-none-any.whl.metadata (2.5 kB)
Collecting requests (from kivy)
  Downloading requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting filetype (from kivy)
  Downloading filetype-1.2.0-py2.py3-none-any.whl.metadata (6.5 kB)
Collecting charset-normalizer<4,>=2 (from requests->kivy)
  Downloading charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (35 kB)
Collecting idna<4,>=2.5 (from requests->kivy)
  Downloading idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting urllib3<3,>=1.21.1 (from requests->kivy)
  Downloading urllib3-2.4.0-py3-none-any.whl.metadata (6.5 kB)
Collecting certifi>=2017.4.17 (from requests->kivy)
  Downloading certifi-2025.1.31-py3-none-any.whl.metadata (2.5 kB)
Downloading Kivy-2.3.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.9/22.9 MB 2.1 MB/s eta 0:00:00
Downloading Kivy_Garden-0.1.5-py3-none-any.whl (4.6 kB)
Downloading docutils-0.21.2-py3-none-any.whl (587 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 587.4/587.4 kB 874.8 kB/s eta 0:00:00
Downloading filetype-1.2.0-py2.py3-none-any.whl (19 kB)
Downloading pygments-2.19.1-py3-none-any.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 1.5 MB/s eta 0:00:00
Downloading requests-2.32.3-py3-none-any.whl (64 kB)
Downloading certifi-2025.1.31-py3-none-any.whl (166 kB)
Downloading charset_normalizer-3.4.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (144 kB)
Downloading idna-3.10-py3-none-any.whl (70 kB)
Downloading urllib3-2.4.0-py3-none-any.whl (128 kB)
Installing collected packages: filetype, urllib3, pygments, idna, docutils, charset-normalizer, certifi, requests, Kivy-Garden, kivy
Successfully installed Kivy-Garden-0.1.5 certifi-2025.1.31 charset-normalizer-3.4.1 docutils-0.21.2 filetype-1.2.0 idna-3.10 kivy-2.3.1 pygments-2.19.1 requests-2.32.3 urllib3-2.4.0
(comcow) [root@b2203f6c0d63 ~]#

So we can see here that the Kivy module installed, and pip pulled in nine additional packages that Kivy depends on.

We repeat this process until we have installed the following modules:

  • kivy
  • kivymd
  • psutil
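You can make this step repeatable by capturing the three top-level dependencies in a requirements file; the pinned versions below are the ones pip resolved during this session:

```
# requirements.txt -- top-level dependencies for comcow.py
kivy==2.3.1
kivymd==1.2.0
psutil==7.0.0
```

A single pip install -r requirements.txt in a fresh virtual environment then rebuilds the same site-packages, transitive dependencies included.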

Now our complex cow works:

(comcow) [root@b2203f6c0d63 ~]# ./comcow.py
[INFO   ] [Logger      ] Record log in /root/.kivy/logs/kivy_25-04-26_3.txt
[INFO   ] [Kivy        ] v2.3.1
[INFO   ] [Kivy        ] Installed at "/root/comcow/lib/python3.13/site-packages/kivy/__init__.py"
[INFO   ] [Python      ] v3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)]
[INFO   ] [Python      ] Interpreter at "/root/comcow/bin/python3"
[INFO   ] [Logger      ] Purge log fired. Processing...
[INFO   ] [Logger      ] Purge finished!
[INFO   ] [KivyMD      ] 1.2.0, git-Unknown, 2025-04-26 (installed at "/root/comcow/lib/python3.13/site-packages/kivymd/__init__.py")
[WARNING] [KivyMD      ] Version 1.2.0 is deprecated and is no longer supported. Use KivyMD version 2.0.0 from the master branch (pip install https://github.com/kivymd/KivyMD/archive/master.zip)
[INFO   ] [Factory     ] 195 symbols loaded
[INFO   ] [Image       ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO   ] [Text        ] Provider: sdl2
Moo World!
(comcow) [root@b2203f6c0d63 ~]#

NOTE: I am not bothering to generate a Kivy GUI inside the manylinux Docker container because the container does not have access to Linux’s DRI (Direct Rendering Infrastructure). We don’t need to do that for this purpose.

Locating Our Components

Now everything our app needs exists in this running Docker container. We just need to copy the components out of the container and place them in our AppDir.

WARNING: None of the files you create inside this manylinux Docker container are persistent. You must copy your created files to your host environment before you exit your shell, or they will be lost and have to be created again.

Let’s take a quick look at the virtual environment we named comcow:

(comcow) [root@b2203f6c0d63 ~]# cd comcow/
bin/        .gitignore  include/    lib/        lib64/      pyvenv.cfg  
(comcow) [root@b2203f6c0d63 ~]# cd comcow/lib
lib/   lib64/
(comcow) [root@b2203f6c0d63 ~]# cd comcow/lib/python3.13/site-packages/
(comcow) [root@b2203f6c0d63 site-packages]# ls -FC
certifi/			     kivymd/
certifi-2025.1.31.dist-info/	     kivymd-1.2.0.dist-info/
charset_normalizer/		     PIL/
charset_normalizer-3.4.1.dist-info/  pillow-11.2.1.dist-info/
docutils/			     pillow.libs/
docutils-0.21.2.dist-info/	     pip/
filetype/			     pip-25.0.1.dist-info/
filetype-1.2.0.dist-info/	     psutil/
garden/				     psutil-7.0.0.dist-info/
idna/				     pygments/
idna-3.10.dist-info/		     pygments-2.19.1.dist-info/
kivy/				     requests/
Kivy-2.3.1.dist-info/		     requests-2.32.3.dist-info/
Kivy_Garden-0.1.5.dist-info/	     urllib3/
Kivy.libs/			     urllib3-2.4.0.dist-info/
(comcow) [root@b2203f6c0d63 site-packages]#

There are all of the extra Python modules that were needed to run our test app. We need to copy this work to our AppDir workspace outside of the Docker container.

(comcow) [root@b2203f6c0d63 ~]# tar -cf comcow.venv.tar comcow comcow.py
(comcow) [root@b2203f6c0d63 ~]# ls -l
total 132440
drwxr-xr-x 5 root root      4096 Apr 25 23:28 comcow
-rwxr-xr-x 1 root root       407 Apr 26 00:37 comcow.py
-rw-r--r-- 1 root root 135608320 Apr 26 01:00 comcow.venv.tar
(comcow) [root@b2203f6c0d63 ~]# gzip comcow.venv.tar
(comcow) [root@b2203f6c0d63 ~]# ls -l
total 40692
drwxr-xr-x 5 root root     4096 Apr 25 23:28 comcow
-rwxr-xr-x 1 root root      407 Apr 26 00:37 comcow.py
-rw-r--r-- 1 root root 41659512 Apr 26 01:00 comcow.venv.tar.gz
(comcow) [root@b2203f6c0d63 ~]#

Now we have a tar archive of our virtual environment. We need to go back to our host system without closing our Docker container and perform a copy.

From outside the Docker container, we will use Docker’s ps command to get the container ID which we will use in a copy command.

dev@host:~$ cd AppImage/
dev@host:~/AppImage$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED       STATUS       PORTS     NAMES
b2203f6c0d63   01bf8a9e3a53   "manylinux-entrypoin…"   3 hours ago   Up 3 hours             flamboyant_johnson

The container ID is b2203f6c0d63. We now use that ID to copy our newly created venv file from the container into our host machine’s AppImage work area.

WARNING: You will need to check for a new container ID every time you restart a Docker container, as the ID is regenerated each time. In the course of creating this blog post I started multiple Docker containers, so you may notice the container ID changing throughout this document.

dev@host:~/AppImage$ docker cp b2203f6c0d63:/root/comcow.venv.tar.gz .
Successfully copied 41.7MB to /home/dev/AppImage/.
dev@host:~/AppImage$

Now we need to extract a Python runtime system. Manylinux puts the Python runtimes in the /opt directory. Let’s locate the runtime system.

[root@96d0a1e00054 /]# cd /opt
[root@96d0a1e00054 opt]# ls -FC
_internal/  python/  rh/
[root@96d0a1e00054 opt]# cd python/
[root@96d0a1e00054 python]# ls -l
total 0
lrwxrwxrwx 1 root root 30 Apr 19 05:09 cp310-cp310 -> /opt/_internal/cpython-3.10.17
lrwxrwxrwx 1 root root 30 Apr 19 05:09 cp311-cp311 -> /opt/_internal/cpython-3.11.12
lrwxrwxrwx 1 root root 30 Apr 19 05:09 cp312-cp312 -> /opt/_internal/cpython-3.12.10
lrwxrwxrwx 1 root root 29 Apr 19 05:09 cp313-cp313 -> /opt/_internal/cpython-3.13.3
lrwxrwxrwx 1 root root 35 Apr 19 05:09 cp313-cp313t -> /opt/_internal/cpython-3.13.3-nogil
lrwxrwxrwx 1 root root 29 Apr 19 05:09 cp36-cp36m -> /opt/_internal/cpython-3.6.15
lrwxrwxrwx 1 root root 29 Apr 19 05:09 cp37-cp37m -> /opt/_internal/cpython-3.7.17
lrwxrwxrwx 1 root root 29 Apr 19 05:09 cp38-cp38 -> /opt/_internal/cpython-3.8.20
lrwxrwxrwx 1 root root 29 Apr 19 05:09 cp39-cp39 -> /opt/_internal/cpython-3.9.22
lrwxrwxrwx 1 root root 33 Apr 19 05:10 pp310-pypy310_pp73 -> /opt/_internal/pp310-pypy310_pp73
lrwxrwxrwx 1 root root 33 Apr 19 05:10 pp311-pypy311_pp73 -> /opt/_internal/pp311-pypy311_pp73
[root@96d0a1e00054 python]#

We have found our Python runtimes! Manylinux even includes a couple of versions of the PyPy JIT as an alternative to the CPython runtime. The Kivy framework depends on some CPython extensions, so we can’t use PyPy in this case.
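If a program ever needs to confirm at runtime which implementation it landed on, the standard library can report it; a quick sketch:

```python
import platform
import sys

# python_implementation() distinguishes CPython from PyPy (among others);
# sys.version carries the full build banner seen in the sessions above.
print(platform.python_implementation())
print(sys.version)
```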

Remember which Python version we used to create our virtual environment earlier?

[root@b2203f6c0d63 ~]# python3.13 -m venv comcow

We will want to find the python3.13 version.

[root@96d0a1e00054 python]# ls -l *313*
lrwxrwxrwx 1 root root 29 Apr 19 05:09 cp313-cp313 -> /opt/_internal/cpython-3.13.3
lrwxrwxrwx 1 root root 35 Apr 19 05:09 cp313-cp313t -> /opt/_internal/cpython-3.13.3-nogil
[root@96d0a1e00054 python]#

There are two versions of CPython 3.13 in our container: one with a Global Interpreter Lock (GIL) and an experimental free-threaded version with no GIL. Some day CPython will have no GIL by default, but today it is considered normal to use a GIL-equipped CPython engine. When we created our virtual environment we used the regular GIL-equipped engine. The no-GIL version in this manylinux container is called python3.13t.
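If you ever need to verify which flavor a given binary is, the build configuration exposes it; a small sketch (the sys._is_gil_enabled() call exists on 3.13 and later):

```python
import sys
import sysconfig

# Py_GIL_DISABLED is 1 on free-threaded ("t") builds and 0 or unset on
# regular builds; sys._is_gil_enabled() reports the current runtime state.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
print("free-threaded build:", free_threaded)
if hasattr(sys, "_is_gil_enabled"):
    print("GIL enabled right now:", sys._is_gil_enabled())
```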

Let’s take a look at the cp313-cp313 directory.

[root@96d0a1e00054 python]# cd /opt/_internal
[root@96d0a1e00054 _internal]# ls -FC
build_scripts/	  cpython-3.13.3-nogil/  pipx/
certs.pem@	  cpython-3.6.15/	 pp310-pypy310_pp73/
cpython-3.10.17/  cpython-3.7.17/	 pp311-pypy311_pp73/
cpython-3.11.12/  cpython-3.8.20/	 static-libs-for-embedding-only.tar.xz
cpython-3.12.10/  cpython-3.9.22/	 tools/
cpython-3.13.3/   mpdecimal-4/
[root@96d0a1e00054 _internal]# cd cpython-3.13.3
[root@96d0a1e00054 cpython-3.13.3]# ls -FC
bin/  include/	lib/  share/
[root@96d0a1e00054 cpython-3.13.3]#

Here we see the directory structure of our CPython installation. Feel free to explore with your favorite command line tools. Sadly, tree is not present in this container, but du works.

Let’s tar it up and copy it to our host system. You can add a v to your tar arguments if you want to see all the files; I didn’t do that here so the command line output stays terse, for the sake of brevity.

[root@96d0a1e00054 ~]# tar -cf cp313.tar /opt/_internal/cpython-3.13.3
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
[root@96d0a1e00054 ~]# gzip cp313.tar
[root@96d0a1e00054 ~]# ls -lh
total 17M
-rw-r--r-- 1 root root 17M May  1 20:57 cp313.tar.gz
[root@96d0a1e00054 ~]#

Now from our host system we perform the copy.

dev@host:~/AppImage$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED             STATUS             PORTS     NAMES
96d0a1e00054   01bf8a9e3a53   "manylinux-entrypoin…"   About an hour ago   Up About an hour             peaceful_bardeen
dev@host:~/AppImage$ docker cp 96d0a1e00054:/root/cp313.tar.gz .
Successfully copied 17MB to /home/dev/AppImage/.
dev@host:~/AppImage$ ls -l
total 57252
-rw-r--r-- 1 dev dev 41659512 Apr 26 01:00 comcow.venv.tar.gz
-rw-r--r-- 1 dev dev 16964709 May  1 20:57 cp313.tar.gz
dev@host:~/AppImage$

Do you remember in our Part I blog post we created an AppDir for our spherical cow app? We are going to reuse that for our more complex cow.

dev@host:~$ cd ~/AppImage/Moo_World.AppDir/
dev@host:~/AppImage/Moo_World.AppDir$ tree
.
├── AppRun
├── moo_world.desktop
├── moo_world.png -> Moo_World.png
├── Moo_World.png
└── usr
    ├── bin
    │   └── moo_world.py
    ├── lib
    └── share
        ├── applications
        └── icons

7 directories, 5 files
dev@host:~/AppImage/Moo_World.AppDir$

We have some choices about where to extract our CPython runtime in this directory structure. It is up to you; I am going to put it in usr/share.

dev@host:~/AppImage/Moo_World.AppDir/usr/share$ ls -FC
applications/  icons/  opt/
dev@host:~/AppImage/Moo_World.AppDir/usr/share$

Whoops. I forgot that I bundled the whole opt/_internal path. That is easy to fix.

dev@host:~/AppImage/Moo_World.AppDir/usr/share$ cd opt/_internal/
dev@host:~/AppImage/Moo_World.AppDir/usr/share/opt/_internal$ ls
cpython-3.13.3
dev@host:~/AppImage/Moo_World.AppDir/usr/share/opt/_internal$ mv cpython-3.13.3/ ../..
dev@host:~/AppImage/Moo_World.AppDir/usr/share/opt/_internal$ cd ../..
dev@host:~/AppImage/Moo_World.AppDir/usr/share$ ls -FC
applications/  cpython-3.13.3/  icons/  opt/
dev@host:~/AppImage/Moo_World.AppDir/usr/share$ rm -rf opt
dev@host:~/AppImage/Moo_World.AppDir/usr/share$ ls -FC
applications/  cpython-3.13.3/  icons/
dev@host:~/AppImage/Moo_World.AppDir/usr/share$

Now we need to extract the modules from the venv we created earlier into our AppDir. Again we have some choices about where to put them. I decided to place them in usr/share again.

dev@host:~/AppImage/Moo_World.AppDir/usr/share$ tar -xzf ~/AppImage/comcow.venv.tar.gz
dev@host:~/AppImage/Moo_World.AppDir/usr/share$ ls -FC
applications/  comcow/  comcow.py*  cpython-3.13.3/  icons/
dev@host:~/AppImage/Moo_World.AppDir/usr/share$

We picked up comcow.py in this tar file as well. This is great, as we can use it for testing our Python environment. We will just move it to usr/bin as a matter of convention.

dev@host:~/AppImage/Moo_World.AppDir/usr/share$ mv comcow.py ../bin
dev@host:~/AppImage/Moo_World.AppDir/usr/share$ ls -l ../bin
total 8
-rwxr-xr-x 1 dev dev 407 Apr 26 00:37 comcow.py
-rwxrwxr-x 1 dev dev  44 Apr 18 01:32 moo_world.py
dev@host:~/AppImage/Moo_World.AppDir/usr/share$

Setting up Our AppImage’s Paths

Now we need to set up our environment paths in our AppRun shell script to ensure that the correct Python engine is executed. To do this we will need to make sure the LD_LIBRARY_PATH, PATH, and PYTHONPATH environment variables are set correctly before we execute our Python program.

We will want to put the paths of the Python modules that we gathered from our manylinux Docker container at the front of our PYTHONPATH variable to make sure our CPython engine can locate its native and pip-installed modules.

Let’s start by adding PYTHONPATH to our AppRun file.

#!/usr/bin/env sh

HERE="$(dirname "$(readlink -f "${0}")")"
export PYTHONPATH="${HERE}/usr/share/comcow/lib/python3.13/site-packages:${HERE}/usr/share/cpython-3.13.3/lib/python3.13:${PYTHONPATH}"
export PATH="${HERE}/usr/share/cpython-3.13.3/bin:${HERE}/usr/bin:${PATH}"
export LD_LIBRARY_PATH="${HERE}/usr/lib:${LD_LIBRARY_PATH}"
EXEC="${HERE}/usr/bin/comcow.py"
exec "${EXEC}" "$@"   # forward any command-line arguments to the app

Here you can see we added our virtual environment Python modules from usr/share/comcow/lib/python3.13/site-packages and our native CPython modules from usr/share/cpython-3.13.3/lib/python3.13 to our PYTHONPATH. For the PATH we add the path to the CPython engine located at usr/share/cpython-3.13.3/bin so our Python program’s shebang can locate the runtime engine. And lastly we add usr/lib to our LD_LIBRARY_PATH so we can add any shared libraries we may need should we encounter a missing library issue. SPOILERS

Testing and Debugging Our AppDir

The moment of truth. We can now attempt to run our complex cow from our AppRun script! What could possibly go wrong? Right? Everything always works on the first try. Everyone knows that is why programming is so easy. New code always works on the first try. SARCASM

dev@host:~/AppImage/Moo_World.AppDir$ ./AppRun
[INFO   ] [Logger      ] Record log in /home/dev/.kivy/logs/kivy_25-05-07_0.txt
[INFO   ] [Kivy        ] v2.3.1
[INFO   ] [Kivy        ] Installed at "/home/dev/AppImage/Moo_World.AppDir/usr/share/comcow/lib/python3.13/site-packages/kivy/__init__.py"
[INFO   ] [Python      ] v3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)]
[INFO   ] [Python      ] Interpreter at "/home/dev/AppImage/Moo_World.AppDir/usr/share/cpython-3.13.3/bin/python3"
[INFO   ] [Logger      ] Purge log fired. Processing...
[INFO   ] [Logger      ] Purge finished!
[INFO   ] [KivyMD      ] 1.2.0, git-Unknown, 2025-04-26 (installed at "/home/dev/AppImage/Moo_World.AppDir/usr/share/comcow/lib/python3.13/site-packages/kivymd/__init__.py")
[WARNING] [KivyMD      ] Version 1.2.0 is deprecated and is no longer supported. Use KivyMD version 2.0.0 from the master branch (pip install https://github.com/kivymd/KivyMD/archive/master.zip)
[INFO   ] [Factory     ] 195 symbols loaded
[INFO   ] [Image       ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO   ] [Text        ] Provider: sdl2
 Traceback (most recent call last):
   File "/home/dev/AppImage/Moo_World.AppDir/usr/bin/comcow.py", line 22, in <module>
     import sqlite3
   File "/home/dev/AppImage/Moo_World.AppDir/usr/share/cpython-3.13.3/lib/python3.13/sqlite3/__init__.py", line 57, in <module>
     from sqlite3.dbapi2 import *
   File "/home/dev/AppImage/Moo_World.AppDir/usr/share/cpython-3.13.3/lib/python3.13/sqlite3/dbapi2.py", line 27, in <module>
     from _sqlite3 import *
 ImportError: libsqlite3.so: cannot open shared object file: No such file or directory
dev@host:~/AppImage/Moo_World.AppDir$ ./AppRun

Well, that is ugly. Actually it is a pretty straightforward error message. Even though sqlite3 is a standard Python module, CPython dynamically links against the SQLite shared library located on the host system rather than packaging its own. It turns out that the SQLite library is not included in the CPython installation.
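A handy way to audit this kind of dependency is ldd, which lists the shared objects a binary needs and marks any the dynamic linker cannot resolve. The extension-module path in the comment below is from this AppDir layout and is illustrative; the last line is a generic stand-in that runs anywhere:

```shell
# Illustrative, for the AppDir in this post: any "not found" lines are
# libraries we need to bundle ourselves.
#   ldd usr/share/cpython-3.13.3/lib/python3.13/lib-dynload/_sqlite3.cpython-313-x86_64-linux-gnu.so
# Generic stand-in: inspect the interpreter binary itself.
ldd "$(command -v python3)" | head -n 5
```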

So let’s locate the libsqlite3.so file in our manylinux Docker container and copy it to our host AppDir’s usr/lib directory.

From our Docker container:

[root@37465ae02dde ~]# find / -name libsqlite3.so -print
/usr/local/lib/libsqlite3.so
[root@37465ae02dde ~]# cd /usr/local/lib
[root@37465ae02dde lib]# ls -FC
libltdl.a    libltdl.so.7*	libsqlite3.so.0@
libltdl.la*  libltdl.so.7.3.2*	libsqlite3.so.3.49.1*
libltdl.so*  libsqlite3.so@	pkgconfig/
[root@37465ae02dde lib]# ls -l
total 1264
-rw-r--r-- 1 root root   62314 Apr 19 04:54 libltdl.a
-rwxr-xr-x 1 root root     922 Apr 19 04:54 libltdl.la
-rwxr-xr-x 3 root root   38872 Apr 19 04:54 libltdl.so
-rwxr-xr-x 3 root root   38872 Apr 19 04:54 libltdl.so.7
-rwxr-xr-x 3 root root   38872 Apr 19 04:54 libltdl.so.7.3.2
lrwxrwxrwx 1 root root      20 Apr 19 04:56 libsqlite3.so -> libsqlite3.so.3.49.1
lrwxrwxrwx 1 root root      20 Apr 19 04:56 libsqlite3.so.0 -> libsqlite3.so.3.49.1
-rwxr-xr-x 1 root root 1095064 Apr 19 04:56 libsqlite3.so.3.49.1
drwxr-xr-x 2 root root    4096 Apr 19 04:56 pkgconfig
[root@37465ae02dde lib]#

Thar she blows! SQLite located! We will want to copy the library and then create comparable symbolic links.

From the host system:

dev@host:~/AppImage/Moo_World.AppDir$ cd usr/lib
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
37465ae02dde   01bf8a9e3a53   "manylinux-entrypoin…"   10 minutes ago   Up 10 minutes             dazzling_allen
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$ docker cp 37465ae02dde:/usr/local/lib/libsqlite3.so.3.49.1 .
Successfully copied 1.1MB to /home/dev/AppImage/Moo_World.AppDir/usr/lib/.
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$ ls -l
total 1072
-rwxr-xr-x 1 dev dev 1095064 Apr 19 04:56 libsqlite3.so.3.49.1
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$ ln -s libsqlite3.so.3.49.1 libsqlite3.so
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$ ln -s libsqlite3.so.3.49.1 libsqlite3.so.0
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$ ls -l
total 1072
lrwxrwxrwx 1 dev dev      20 May  7 20:10 libsqlite3.so -> libsqlite3.so.3.49.1
lrwxrwxrwx 1 dev dev      20 May  7 20:11 libsqlite3.so.0 -> libsqlite3.so.3.49.1
-rwxr-xr-x 1 dev dev 1095064 Apr 19 04:56 libsqlite3.so.3.49.1
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$

Okay. We have copied the SQLite library and created our symbolic links to ensure the CPython engine will be able to find and link to it.
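Before rerunning AppRun you can verify the fix in isolation by running the bundled interpreter with the new library path (from the AppDir root in this session: LD_LIBRARY_PATH=usr/lib usr/share/cpython-3.13.3/bin/python3) and trying the import directly:

```python
import sqlite3

# If libsqlite3.so now resolves, the import succeeds and we can report the
# version of the linked C library; if not, it raises the same ImportError
# seen in the traceback earlier.
print("SQLite", sqlite3.sqlite_version)
```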

Let’s try running it again. It must work now. What are the chances that it fails on the second try? That never happens. EXTREME SARCASM

dev@host:~/AppImage/Moo_World.AppDir$ ./AppRun
[INFO   ] [Logger      ] Record log in /home/dev/.kivy/logs/kivy_25-05-07_4.txt
[INFO   ] [Kivy        ] v2.3.1
[INFO   ] [Kivy        ] Installed at "/home/dev/AppImage/Moo_World.AppDir/usr/share/comcow/lib/python3.13/site-packages/kivy/__init__.py"
[INFO   ] [Python      ] v3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)]
[INFO   ] [Python      ] Interpreter at "/home/dev/AppImage/Moo_World.AppDir/usr/share/cpython-3.13.3/bin/python3"
[INFO   ] [Logger      ] Purge log fired. Processing...
[INFO   ] [Logger      ] Purge finished!
[INFO   ] [KivyMD      ] 1.2.0, git-Unknown, 2025-04-26 (installed at "/home/dev/AppImage/Moo_World.AppDir/usr/share/comcow/lib/python3.13/site-packages/kivymd/__init__.py")
[WARNING] [KivyMD      ] Version 1.2.0 is deprecated and is no longer supported. Use KivyMD version 2.0.0 from the master branch (pip install https://github.com/kivymd/KivyMD/archive/master.zip)
[INFO   ] [Factory     ] 195 symbols loaded
[INFO   ] [Image       ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO   ] [Text        ] Provider: sdl2
 Traceback (most recent call last):
   File "/home/dev/AppImage/Moo_World.AppDir/usr/bin/comcow.py", line 27, in <module>
     import tkinter
   File "/home/dev/AppImage/Moo_World.AppDir/usr/share/cpython-3.13.3/lib/python3.13/tkinter/__init__.py", line 38, in <module>
     import _tkinter # If this fails your Python may not be configured for Tk
     ^^^^^^^^^^^^^^^
 ImportError: libtk8.6.so: cannot open shared object file: No such file or directory
dev@host:~/AppImage/Moo_World.AppDir$

Still doesn’t work. But wait! Our SQLite error is gone! We have a new issue to resolve.

This is the same type of problem we had with SQLite. Although tkinter is a standard Python module, CPython dynamically links against the host’s Tcl/Tk libraries rather than bundling its own. We will do the same thing we did to fix the libsqlite3.so link error and copy the missing Tk library.

From Docker container:

[root@37465ae02dde ~]# find / -name libtk8.6.so -print
/usr/lib64/libtk8.6.so
/opt/_internal/pp311-pypy311_pp73/lib/libtk8.6.so
/opt/_internal/pp310-pypy310_pp73/lib/libtk8.6.so
[root@37465ae02dde ~]#

Here we see three occurrences. But two of them are packaged with PyPy. We will grab the one in /usr/lib64.

From host:

dev@host:~/AppImage/Moo_World.AppDir/usr/lib$ docker cp 37465ae02dde:/usr/lib64/libtk8.6.so .
Successfully copied 1.53MB to /home/dev/AppImage/Moo_World.AppDir/usr/lib/.
dev@host:~/AppImage/Moo_World.AppDir/usr/lib$

Now it should work, right? It has been scientifically proven that the third time is the charm. Science wouldn’t fail us. BEYOND EXTREME SARCASM

dev@host:~/AppImage/Moo_World.AppDir$ ./AppRun
[INFO   ] [Logger      ] Record log in /home/dev/.kivy/logs/kivy_25-05-07_5.txt
[INFO   ] [Kivy        ] v2.3.1
[INFO   ] [Kivy        ] Installed at "/home/dev/AppImage/Moo_World.AppDir/usr/share/comcow/lib/python3.13/site-packages/kivy/__init__.py"
[INFO   ] [Python      ] v3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)]
[INFO   ] [Python      ] Interpreter at "/home/dev/AppImage/Moo_World.AppDir/usr/share/cpython-3.13.3/bin/python3"
[INFO   ] [Logger      ] Purge log fired. Processing...
[INFO   ] [Logger      ] Purge finished!
[INFO   ] [KivyMD      ] 1.2.0, git-Unknown, 2025-04-26 (installed at "/home/dev/AppImage/Moo_World.AppDir/usr/share/comcow/lib/python3.13/site-packages/kivymd/__init__.py")
[WARNING] [KivyMD      ] Version 1.2.0 is deprecated and is no longer supported. Use KivyMD version 2.0.0 from the master branch (pip install https://github.com/kivymd/KivyMD/archive/master.zip)
[INFO   ] [Factory     ] 195 symbols loaded
[INFO   ] [Image       ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO   ] [Text        ] Provider: sdl2
Moo World!
dev@host:~/AppImage/Moo_World.AppDir$

Hey! It worked!

Building and Testing Our Complex Cow’s AppImage

Now we need to package it up and move it to our test environment.

dev@host:~/AppImage$ ARCH=x86_64 appimagetool Moo_World.AppDir
appimagetool, continuous build (git version c247c92), build 246 built on 2025-03-10 23:33:23 UTC
Using architecture x86_64
/home/dev/AppImage/Moo_World.AppDir should be packaged as Moo_World-x86_64.AppImage
WARNING: AppStream upstream metadata is missing, please consider creating it
         in usr/share/metainfo/moo_world.appdata.xml
         Please see https://www.freedesktop.org/software/appstream/docs/chap-Quickstart.html#sect-Quickstart-DesktopApps
         for more information or use the generator at
         https://docs.appimage.org/packaging-guide/optional/appstream.html#using-the-appstream-generator
Generating squashfs...
Downloading runtime file from https://github.com/AppImage/type2-runtime/releases/download/continuous/runtime-x86_64
Downloaded runtime binary of size 944632
Parallel mksquashfs: Using 4 processors
Creating 4.0 filesystem on Moo_World-x86_64.AppImage, block size 131072.
[=============================================================/] 7696/7696 100%

Exportable Squashfs 4.0 filesystem, zstd compressed, data block size 131072
	compressed data, compressed metadata, compressed fragments,
	compressed xattrs, compressed ids
	duplicates are removed
Filesystem size 52561.15 Kbytes (51.33 Mbytes)
	26.83% of uncompressed filesystem size (195875.31 Kbytes)
Inode table size 65976 bytes (64.43 Kbytes)
	25.65% of uncompressed inode table size (257205 bytes)
Directory table size 75863 bytes (74.08 Kbytes)
	38.38% of uncompressed directory table size (197684 bytes)
Number of duplicate files found 553
Number of inodes 7873
Number of files 6940
Number of fragments 679
Number of symbolic links 17
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 916
Number of hard-links 62
Number of ids (unique uids + gids) 1
Number of uids 1
	root (0)
Number of gids 1
	root (0)
Embedding ELF...
Marking the AppImage as executable...
Embedding MD5 digest
Success

Please consider submitting your AppImage to AppImageHub, the crowd-sourced
central directory of available AppImages, by opening a pull request
at https://github.com/AppImage/appimage.github.io
dev@host:~/AppImage$
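Notice the warning about missing AppStream metadata in the appimagetool output above. It is only a warning, but it is easy to silence with a minimal metainfo file at usr/share/metainfo/moo_world.appdata.xml. Below is a sketch of what one could look like; the summary, description, and license values are placeholders I made up for this example, not anything appimagetool requires:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop-application">
  <!-- The id should match the desktop file's base name. -->
  <id>moo_world</id>
  <name>Moo World</name>
  <summary>A demonstration Kivy application packaged as an AppImage</summary>
  <metadata_license>CC0-1.0</metadata_license>
  <project_license>MIT</project_license>
  <description>
    <p>Example application showing how to package a Python runtime
       and its dependencies inside an AppImage.</p>
  </description>
  <launchable type="desktop-id">moo_world.desktop</launchable>
</component>
```

The AppStream quickstart page linked in the warning covers the full set of supported tags if you want richer metadata for software centers.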

Let’s do a quick test on our development server:

dev@host:~/AppImage$ ls
comcow.venv.tar.gz  cp313.tar.gz  Moo_World.AppDir  Moo_World-x86_64.AppImage
dev@host:~/AppImage$ ./Moo_World-x86_64.AppImage
[INFO   ] [Logger      ] Record log in /home/dev/.kivy/logs/kivy_25-05-07_6.txt
[INFO   ] [Kivy        ] v2.3.1
[INFO   ] [Kivy        ] Installed at "/tmp/.mount_Moo_WofbdAoB/usr/share/comcow/lib/python3.13/site-packages/kivy/__init__.py"
[INFO   ] [Python      ] v3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)]
[INFO   ] [Python      ] Interpreter at "/tmp/.mount_Moo_WofbdAoB/usr/share/cpython-3.13.3/bin/python3"
[INFO   ] [Logger      ] Purge log fired. Processing...
[INFO   ] [Logger      ] Purge finished!
[INFO   ] [KivyMD      ] 1.2.0, git-Unknown, 2025-04-26 (installed at "/tmp/.mount_Moo_WofbdAoB/usr/share/comcow/lib/python3.13/site-packages/kivymd/__init__.py")
[WARNING] [KivyMD      ] Version 1.2.0 is deprecated and is no longer supported. Use KivyMD version 2.0.0 from the master branch (pip install https://github.com/kivymd/KivyMD/archive/master.zip)
[INFO   ] [Factory     ] 195 symbols loaded
[INFO   ] [Image       ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO   ] [Text        ] Provider: sdl2
Moo World!
dev@host:~/AppImage$

NOTE NOTE: One of the bonus features of using Kivy in this example is the log info it writes to your terminal. The third output line shows that the Kivy package is being picked up from inside the AppImage filesystem, and the fourth and fifth lines show that the Python interpreter being used is also the one we packaged. You can add your own Python code to dump this info, but in this case Kivy does it for us.
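If your app doesn't use Kivy, a few lines of standard-library Python can print the same runtime details at startup. This is just a sketch; nothing here is Kivy-specific:

```python
import sys
import sysconfig

# Which interpreter binary is actually running? When launched from a
# mounted AppImage this path starts with something like /tmp/.mount_...
print(f"Interpreter:   {sys.executable}")
print(f"Version:       {sys.version.split()[0]}")

# Where are third-party packages resolved from? This confirms the
# bundled site-packages is being used rather than a system one.
print(f"Site-packages: {sysconfig.get_path('purelib')}")
```

Running this from inside the AppImage should show paths under the /tmp mount point; running it against the system Python shows system paths instead.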

Runs fine. But we want to test this on a bare-bones installation. I have chosen PureOS 10 for this test. It ships an older version of CPython, which helps demonstrate that our packaged Python runtime is the one being used.

dev@PureOS:~$ cat /etc/lsb-release
DISTRIB_ID=PureOS
DISTRIB_RELEASE=10.x
DISTRIB_CODENAME=byzantium
DISTRIB_DESCRIPTION="PureOS 10"
dev@PureOS:~$ python3 --version
Python 3.9.2
dev@PureOS:~$ ./Moo_World-x86_64.AppImage
[INFO   ] [Logger      ] Record log in /home/dev/.kivy/logs/kivy_25-05-07_1.txt
[INFO   ] [Kivy        ] v2.3.1
[INFO   ] [Kivy        ] Installed at "/tmp/.mount_Moo_WoGBEOBO/usr/share/comcow/lib/python3.13/site-packages/kivy/__init__.py"
[INFO   ] [Python      ] v3.13.3 (main, Apr 19 2025, 05:04:48) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)]
[INFO   ] [Python      ] Interpreter at "/tmp/.mount_Moo_WoGBEOBO/usr/share/cpython-3.13.3/bin/python3"
[INFO   ] [Logger      ] Purge log fired. Processing...
[INFO   ] [Logger      ] Purge finished!
[INFO   ] [KivyMD      ] 1.2.0, git-Unknown, 2025-04-26 (installed at "/tmp/.mount_Moo_WoGBEOBO/usr/share/comcow/lib/python3.13/site-packages/kivymd/__init__.py")
[WARNING] [KivyMD      ] Version 1.2.0 is deprecated and is no longer supported. Use KivyMD version 2.0.0 from the master branch (pip install https://github.com/kivymd/KivyMD/archive/master.zip)
[INFO   ] [Factory     ] 195 symbols loaded
[INFO   ] [Image       ] Providers: img_tex, img_dds, img_sdl2, img_pil (img_ffpyplayer ignored)
[INFO   ] [Text        ] Provider: sdl2
Moo World!
dev@PureOS:~$

If you look through the output you can see we are on PureOS 10, which uses CPython 3.9.2. The Python INFO lines in the Kivy log show that we are running 3.13.3 from /tmp/.mount_Moo_WoGBEOBO/usr/share/cpython-3.13.3/bin/python3, a path inside our AppImage file's squashfs filesystem.

That is exactly what we want!

Summary

Now, this “Complex Cow” is still pretty spherical. I made it just complex enough for you to get the idea of how to package a Python runtime system with your Python app. These just happen to be the dependencies needed to package Muscle Buddy; what you have to package for your app will likely be different.

There may be other complications you need to address, such as needing writable file space for your app. The mounted AppImage filesystem is read-only, so in that case you may need to add logic to set up that space in your AppRun script or inside your Python application, or write an installer program. How you solve that issue is up to you.
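As one illustration of handling writable space from inside the application, the sketch below lazily creates a per-user data directory following the XDG Base Directory convention. The directory name is my own choice for this example, not anything the AppImage format mandates:

```python
import os
from pathlib import Path

def app_data_dir(app_name: str = "moo_world") -> Path:
    """Return a writable per-user data directory, creating it if needed.

    The mounted AppImage filesystem is read-only, so persistent files
    must live somewhere under the user's home directory instead.
    """
    base = os.environ.get("XDG_DATA_HOME",
                          os.path.expanduser("~/.local/share"))
    path = Path(base) / app_name
    path.mkdir(parents=True, exist_ok=True)
    return path

data_dir = app_data_dir()
print(f"Writable data directory: {data_dir}")
```

The same idea works in an AppRun shell script with mkdir -p before the interpreter is launched; which layer does the setup is a matter of taste.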

In addition, I recommend you test your AppImage file on as many bare-bones installations as you can manage. I have six bare-bones VMs for this purpose and plan on setting up more in the future.

Also, by the time you read this, there may be better tools available for making your AppImage. People in the Python and Linux FOSS communities are racing to develop tools that make building AppImages easier, and I personally am betting on some of those horses.

If you are still reading this, I congratulate you on your fortitude in wading through this process. If you have read and understood this blog entry, I am confident you have what it takes to tackle your very own Complex Cow.

I hope you found this more complex AppImage case helpful.

Enjoy!