Suramya's Blog : Welcome to my crazy life…

October 2, 2022

Upgrading Debian Unstable – How to avoid obvious problems

Filed under: Computer Tips,Linux/Unix Related,Tech Related — Suramya @ 11:59 PM

If you are using Debian Unstable there is a possibility that your system might not work correctly after an upgrade because, as the name states, it is an ‘unstable’ distribution that might have bugs. I use it because Debian Stable has older versions of the software available and I want the latest versions if possible. Plus I don’t mind tinkering with the system if things break, so that helps as well. Over the years I have found some easy ways to prevent the most obvious problems when upgrading, and I will share them here.

The first option is to upgrade the system regularly. You can decide what the frequency of the upgrades is, but do it regularly. I upgrade twice a month, which ensures that the system has the latest updates and is never so far out of sync that I need to download a ton of files for the upgrade. This is very useful when you don’t have much free space available in the root partition, as the longer you wait the more files you have to download and the less free space you have for the actual upgrade.

Another thing that has helped me a lot is to look at the packages being upgraded, specifically any packages being removed. Don’t upgrade if there are a lot of packages being removed without updated versions being installed. To give an example, I tried upgrading my system yesterday and it told me: “457 upgraded, 11 newly installed, 297 to remove and 0 not upgraded.” Looking at the packages it was going to remove, I found that if I had blindly allowed the upgrade to proceed it would have ended up uninstalling my entire KDE install, my VPN server and a whole bunch of other stuff. I waited for a day and tried again; the bug that was causing the system to insist on removing KDE during the upgrade had been resolved and I was able to upgrade successfully.
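If you want to preview the removals without starting an upgrade at all, apt-get’s simulate mode can be used for a dry run; a quick sketch (the -s flag performs a simulation and prints a ‘Remv’ line for every package that would be removed):

apt-get -s dist-upgrade | grep '^Remv'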

I also pipe the output from the apt-get dist-upgrade command to a log file so that I have a record of what was changed, and any errors are logged so I can look at them later in case there are issues. The command I use for that is below:

apt-get dist-upgrade 2>&1 |tee ~suramya/Documents/Suramya/Computer\ Update\ Logs/StarKnight/2022/10032022

I keep all the logs from the upgrades so I can see exactly what was changed on the system and when. It makes it a lot easier to troubleshoot issues caused by an upgrade.
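The date-stamped filename in the command above is typed by hand every time; if you would rather have the shell generate it, something like the following should work (a sketch, using my paths and the same MMDDYYYY naming):

apt-get dist-upgrade 2>&1 | tee ~suramya/Documents/Suramya/Computer\ Update\ Logs/StarKnight/$(date +%Y)/$(date +%m%d%Y)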

If you have multiple systems, then I recommend you don’t upgrade all of them at the same time. I stagger them by a day or two so that in case of issues I still have a working system. This has saved my sanity a few times.

Well, this is all for now. Do share any tips you might have for avoiding issues during an upgrade.

– Suramya

August 12, 2022

Multiple Linux Live CDs on a single USB Drive

Filed under: Computer Tips,Linux/Unix Related,Tech Related — Suramya @ 6:55 PM

Portable boot disks are a lifesaver for a techie and I usually carry one with me most of the time (it’s part of my keychain 🙂 ). However, the issue I would face was that I could only carry one live CD at a time on a USB stick, and if I wanted another one I would either have to search for the pendrive where I had already installed it or burn another one to the drive, which was annoying, especially when I had to switch between OS’s frequently.

So I started searching for an alternative, something similar to the Ultimate Boot CD, which allowed you to have multiple diagnostic tools on one CD, but for live distros and installation media. I tried a bunch of approaches, but the easiest way I found was to use Ventoy to create a bootable USB.

You can download Ventoy from their GitHub Releases page, and installing the tool is as easy as extracting the archive to a folder on your system and then running the correct executable for your system (they have executables for all architectures). Once you run the file as root, select the USB disk you want to use and click install. It takes about a minute for the software to install on the drive and, once completed, it creates two partitions on the disk. The small partition named VTOYEFI is reserved by Ventoy for its boot files, so ensure that you don’t change anything in that partition.

The other partition, labeled Ventoy, is an exFAT partition, and this is where we copy the ISO files for the distributions we want the disk to support. Installing a new OS/tool/CD is as simple as copying its ISO file onto the partition. Once the files are copied, all you have to do is unmount the partition and your new disk is ready to use.
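Putting it together, the whole flow from a Linux machine looks roughly like this; a sketch assuming the drive is /dev/sdX and the data partition is mounted at /media/ventoy (Ventoy2Disk.sh is the installer script shipped in the Linux release archive):

# Install Ventoy onto the USB drive (this wipes the drive)
sudo sh Ventoy2Disk.sh -i /dev/sdX
# Copy the ISOs you want to boot onto the data partition
cp debian-installer.iso kali-linux-live.iso /media/ventoy/
# Flush writes and unmount before unplugging
sync && umount /media/ventoy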

I installed the Debian installer, Kali live CD and Kali installer on an 8GB drive with no issues. When I boot from the disk, I get a menu asking me to select the ISO I want to boot into, and then the system boots into the boot menu for that image. So now I can carry one pen-drive with all the OS’s I would need to troubleshoot a system or reinstall the OS. I think you should be able to boot into the Windows installer as well using this method, but I haven’t tried it yet so can’t confirm for sure.

Well, this is all for now. Will post more later.

– Suramya

July 30, 2022

Identifying the least used packages on Debian

My main system was running low on disk space in the root partition and I wanted to clean out some of the unused software. To do that, I thought I should find out what the least used applications on my system were and then remove them. Unfortunately I couldn’t find any existing way of doing this, so it was a dead end. However, the problem remained stuck in my head and I came up with a quick and dirty way of identifying the packages and when they were last used.

The way it works is:

  • Get a list of all files on the system (using locate, since it’s already there, so why duplicate effort)
  • For each file, figure out what package it belongs to using dpkg-query -S
  • If the file belongs to a package, get its last access time (using stat) and log it
  • Once this is done for all files, sort the results.

This gives us a list of packages and the latest access date for each package (based on the latest access date of any of the files in it). Since this is a quick and dirty implementation it is slow as molasses and doesn’t have any error checking, but it still gets the job done. Would love to get some feedback. The code is available at: https://github.com/suramyatomar/leastUsedPackage.
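For illustration, the core of the approach fits in a few lines of shell. This is a simplified sketch of the idea, not the actual script from the repository:

#!/bin/bash
# Sketch: map every file on disk to its owning package and record the
# file's last access date. The real script additionally keeps only the
# newest date per package.
locate / | while read -r file; do
    # dpkg-query -S prints "package: /path/to/file" for owned files
    pkg=$(dpkg-query -S "$file" 2>/dev/null | cut -d: -f1)
    if [ -n "$pkg" ]; then
        # %x is the human-readable last access time; keep just the date
        atime=$(stat -c '%x' "$file" 2>/dev/null | cut -d' ' -f1)
        [ -n "$atime" ] && echo "$pkg | $atime"
    fi
done | sort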

The output of the script looks like:

...
...
xz-utils | 2022-07-18
yelp-xsl | 2022-04-05
yelp-xsl | 2022-04-05
youtube-dl | 2022-07-17
zim | 2022-07-17
zip | 2022-07-17
zlib1g-dev | 2022-07-17
zlib1g-dev | 2022-07-17
zlib1g-dev | 2022-07-17
zstd | 2022-07-18

Feel free to try it out if you have a similar use case. Let me know if you have any suggestions for improving the script or if you found it useful.

– Suramya

July 9, 2022

Some lesser known Useful Linux commands

Filed under: Computer Tips,Knowledgebase,Linux/Unix Related,Tech Related — Suramya @ 7:15 AM

In this post I am sharing some useful Linux commands originally posted by Traw on Twitter. As it is almost impossible to find stuff on Twitter (even if you favorite it), I am consolidating the entire thread here as a blog post for my reference:

lsmem

lsmem lists the ranges of available memory with their online status. The listed memory blocks correspond to the memory block representation in sysfs. The command also shows the memory block size, the device size, and the amount of memory in online and offline state. The output looks like:

suramya@StarKnight:~$ lsmem
RANGE                                  SIZE  STATE REMOVABLE  BLOCK
0x0000000000000000-0x00000000cfffffff  3.3G online       yes   0-25
0x0000000100000000-0x000000052fffffff 16.8G online       yes 32-165

Memory block size:       128M
Total online memory:      20G
Total offline memory:      0B

lsusb

lsusb lists all the USB buses in the system and the devices connected to them. It is a good way to figure out what USB devices are connected and the vendor and product IDs associated with them. The output looks like:

suramya@StarKnight:~$ lsusb
Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 032: ID 03f0:3b17 HP, Inc LaserJet M1005 MFP
Bus 005 Device 029: ID 8564:4000 Transcend Information, Inc. microSD/SD/CF UHS-II Card Reader [RDF8, RDF9]
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 002: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 8087:0029 Intel Corp. AX200 Bluetooth
Bus 001 Device 004: ID 05e3:0610 Genesys Logic, Inc. Hub
Bus 001 Device 003: ID 413c:2113 Dell Computer Corp. KB216 Wired Keyboard
Bus 001 Device 002: ID 0951:16bc Kingston Technology HyperX Pulsefire FPS Gaming Mouse
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

lsb_release

The lsb_release command displays LSB (Linux Standard Base) information about your specific Linux distribution, including version number, release codename, and distributor ID. The output looks like:

suramya@StarKnight:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux bookworm/sid
Release:        testing/unstable
Codename:       n/a

lsfd

lsfd is intended as a modern replacement for lsof that lists file descriptors on Linux systems. It is tailored to the Linux kernel and supports Linux-specific features such as namespaces. The output looks like:

suramya@StarKnight:~$ lsfd |more
COMMAND            PID    USER  ASSOC MODE TYPE              SOURCE MNTID      INODE NAME
syncthing         1134 suramya    exe  ---  REG                sda5     0     265927 /usr/bin/syncthing
syncthing         1134 suramya    cwd  ---  DIR                sda5     0          2 /
syncthing         1134 suramya    rtd  ---  DIR                sda5     0          2 /
syncthing         1134 suramya cgroup  ---  REG                 0:4     0 4026531835 cgroup:[4026531835]
syncthing         1134 suramya    ipc  ---  REG                 0:4     0 4026531839 ipc:[4026531839]
syncthing         1134 suramya    mnt  ---  REG                 0:4     0 4026533012 mnt:[4026533012]
syncthing         1134 suramya    net  ---  REG                 0:4     0 4026531840 net:[4026531840]
syncthing         1134 suramya    pid  ---  REG                 0:4     0 4026531836 pid:[4026531836]
syncthing         1134 suramya  pid4c  ---  REG                 0:4     0 4026531836 pid:[4026531836]
syncthing         1134 suramya   time  ---  REG                 0:4     0 4026531834 time:[4026531834]
syncthing         1134 suramya time4c  ---  REG                 0:4     0 4026531834 time:[4026531834]
syncthing         1134 suramya   user  ---  REG                 0:4     0 4026531837 user:[4026531837]
syncthing         1134 suramya    uts  ---  REG                 0:4     0 4026531838 uts:[4026531838]
syncthing         1134 suramya    mem  r-x  REG                sda5     0     265927 /usr/bin/syncthing
syncthing         1134 suramya    mem  r--  REG                sda5     0     265927 /usr/bin/syncthing
syncthing         1134 suramya    mem  rw-  REG                sda5     0     265927 /usr/bin/syncthing

lsof

The command lsof stands for ‘list open files’. It displays a list of files that have been opened and, essentially, provides the information needed to determine which files are opened by which process. The output looks like:

root@StarKnight:/tmp# lsof |more
COMMAND      PID    TID TASKCMD               USER   FD      TYPE             DEVICE    SIZE/OFF       NODE NAME
systemd        1                              root  cwd       DIR                8,5        4096          2 /
systemd        1                              root  rtd       DIR                8,5        4096          2 /
systemd        1                              root  txt       REG                8,5     1841792     277271 /usr/lib/systemd/systemd
systemd        1                              root  mem       REG                8,5      161864     280226 /usr/lib/x86_64-linux-gnu/libgpg-error.so.0.33.0
systemd        1                              root  mem       REG                8,5     3081088     264360 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
systemd        1                              root  mem       REG                8,5       26984     273912 /usr/lib/x86_64-linux-gnu/libcap-ng.so.0.0.0
systemd        1                              root  mem       REG                8,5      633512     270536 /usr/lib/x86_64-linux-gnu/libpcre2-8.so.0.11.0
systemd        1                              root  mem       REG                8,5     1321424     264366 /usr/lib/x86_64-linux-gnu/libm-2.33.so
systemd        1                              root  mem       REG                8,5      158400     279628 /usr/lib/x86_64-linux-gnu/liblzma.so.5.2.5
systemd        1                              root  mem       REG                8,5      751840     263041 /usr/lib/x86_64-linux-gnu/libzstd.so.1.5.2
systemd        1                              root  mem       REG                8,5      137568     269425 /usr/lib/x86_64-linux-gnu/liblz4.so.1.9.3
systemd        1                              root  mem       REG                8,5       35280     262500 /usr/lib/x86_64-linux-gnu/libip4tc.so.2.0.0
systemd        1                              root  mem       REG                8,5     1332480     262198 /usr/lib/x86_64-linux-gnu/libgcrypt.so.20.4.1
systemd        1                              root  mem       REG                8,5       18768     264301 /usr/lib/x86_64-linux-gnu/libdl-2.33.so
systemd        1                              root  mem       REG                8,5      202680     264320 /usr/lib/x86_64-linux-gnu/libcrypt.so.1.1.0
systemd        1                              root  mem       REG                8,5       38864     267169 /usr/lib/x86_64-linux-gnu/libcap.so.2.44

lscpu

lscpu gathers CPU architecture information from sysfs, /proc/cpuinfo, and any applicable architecture-specific libraries (e.g. librtas on PowerPC). The command output can be optimized for parsing or for human readability, and can include the number of CPUs, threads, cores, etc. The output looks like:

suramya@StarKnight:~$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         43 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               AuthenticAMD
  Model name:            AMD Ryzen 7 3800X 8-Core Processor
    CPU family:          23
    Model:               113
    Thread(s) per core:  2
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            0
    Frequency boost:     enabled
    CPU(s) scaling MHz:  52%
    CPU max MHz:         4558.8862
    CPU min MHz:         2200.0000
    BogoMIPS:            7786.11
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse
                         3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_p
                         state ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbr
                         v svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization features: 
  Virtualization:        AMD-V
Caches (sum of all):     
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    4 MiB (8 instances)
  L3:                    32 MiB (2 instances)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:         
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, STIBP conditional, RSB filling
  Srbds:                 Not affected
  Tsx async abort:       Not affected

lslogins

lslogins displays information about known users in the system. It examines the wtmp and btmp logs and /etc/shadow (if necessary) along with /etc/passwd to get the desired data. The output looks like:

suramya@StarKnight:~$ lslogins
  UID USER              PROC PWD-LOCK PWD-DENY  LAST-LOGIN GECOS
    0 root               306                   Apr06/15:36 root

lspci

lspci is a command on Unix-like operating systems that prints detailed information about all PCI buses and devices in the system. It is based on a common portable library libpci which offers access to the PCI configuration space on a variety of operating systems. The output looks like:

suramya@StarKnight:~$ lspci
00:00.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Root Complex
00:00.2 IOMMU: Advanced Micro Devices, Inc. [AMD] Starship/Matisse IOMMU
00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
...
0b:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

lsipc

lsipc shows information on the System V inter-process communication facilities for which the calling process has read access. The output looks like:

suramya@StarKnight:~$ lsipc
RESOURCE DESCRIPTION                                              LIMIT USED  USE%
MSGMNI   Number of message queues                                 32000    0 0.00%
MSGMAX   Max size of message (bytes)                                 8K    -     -
MSGMNB   Default max size of queue (bytes)                          16K    -     -
SHMMNI   Shared memory segments                                    4096    4 0.10%
SHMALL   Shared memory pages                       18446744073692774399 1728 0.00%
SHMMAX   Max size of shared memory segment (bytes)                  16E    -     -
SHMMIN   Min size of shared memory segment (bytes)                   1B    -     -
SEMMNI   Number of semaphore identifiers                          32000    0 0.00%
SEMMNS   Total number of semaphores                          1024000000    0 0.00%
SEMMSL   Max semaphores per semaphore set.                        32000    -     -
SEMOPM   Max number of operations per semop(2)                      500    -     -
SEMVMX   Semaphore max value                                      32767    -     -

lslocks

lslocks lists information about all the currently held file locks in a Linux system. It also lists OFD (Open File Description) locks which are not associated with any process (PID is -1). OFD locks are associated with the open file description on which they are acquired. The output looks like:

suramya@StarKnight:~$ lslocks |more
COMMAND            PID  TYPE  SIZE MODE  M      START        END PATH
pipewire          1483 FLOCK       WRITE 0          0          0 /run/user/1000/pipewire-0.lock
firefox-bin      18608 POSIX       WRITE 0          0          0 /mnt/data/Configs/.mozilla/firefox/6hzbxva3.default/.parentlock
firefox-bin      18608 POSIX       READ  0          0          0 /tmp/MozillaUpdateLock-CBDE0CC28E6567B7
plasmashell       1742 POSIX   88K READ  0 1073741826 1073742335 /home/suramya/.local/share/kactivitymanagerd/resources/database
plasmashell       1742 POSIX   32K READ  0        128        128 /home/suramya/.local/share/kactivitymanagerd/resources/database-shm
systemsettings    2116 POSIX   88K READ  0 1073741826 1073742335 /home/suramya/.local/share/kactivitymanagerd/resources/database
systemsettings    2116 POSIX   32K READ  0        128        128 /home/suramya/.local/share/kactivitymanagerd/resources/database-shm
cron               900 FLOCK       WRITE 0          0          0 /run...
kactivitymanage   1754 POSIX   88K READ  0 1073741826 1073742335 /home/suramya/.local/share/kactivitymanagerd/resources/database
kactivitymanage   1754 POSIX   32K READ  0        128        128 /home/suramya/.local/share/kactivitymanagerd/resources/database-shm
firefox-bin      18608 POSIX   75M WRITE 0 1073741826 1073742335 /mnt/data/Configs/.mozilla/firefox/6hzbxva3.default/places.sqlite
firefox-bin      18608 POSIX 74.3M WRITE 0 1073741826 1073742335 /mnt/data/Configs/.mozilla/firefox/6hzbxva3.default/favicons.sqlite
kactivitymanage   1754 POSIX   32K READ  0        124        124 /home/suramya/.local/share/kactivitymanagerd/resources/database-shm

lsmod

lsmod shows the current status of loaded modules in the Linux kernel. It nicely formats the contents of /proc/modules, showing what kernel modules are currently loaded. The output looks like:

suramya@StarKnight:~$ lsmod
Module                  Size  Used by
loop                   32768  0
dm_crypt               61440  0
dm_mod                172032  1 dm_crypt
mptcp_diag             16384  0
tcp_diag               16384  0
udp_diag               16384  0
raw_diag               16384  0
inet_diag              24576  4 tcp_diag,mptcp_diag,raw_diag,udp_diag
unix_diag              16384  0
af_packet_diag         16384  0
netlink_diag           16384  0
uinput                 20480  0
xfrm_user              49152  2
xfrm_algo              16384  1 xfrm_user
...
...
twofish_generic        20480  0
twofish_avx_x86_64     53248  0
twofish_x86_64_3way    32768  1 twofish_avx_x86_64

lsirq

lsirq is a utility to display kernel interrupt information. The output looks like:

IRQ     TOTAL NAME
LOC 438495596 Local timer interrupts
RES 395250211 Rescheduling interrupts
CAL 244198954 Function call interrupts
TLB  50704087 TLB shootdowns
 43  36669756 IR-PCI-MSI 2621443-edge enp5s0-tx-0
 44  33219249 IR-PCI-MSI 2621444-edge enp5s0-tx-1
 42  29631348 IR-PCI-MSI 2621442-edge enp5s0-rx-1
 41  24214613 IR-PCI-MSI 2621441-edge enp5s0-rx-0
 63   5830480 IR-PCI-MSI 3670016-edge ahci[0000:07:00.0]
 45   4564010 IR-PCI-MSI 3147776-edge xhci_hcd
105   4129317 IR-PCI-MSI 4718592-edge nvidia
 64   3354988 IR-PCI-MSI 4194304-edge ahci0
 69   1788338 IR-PCI-MSI 4194309-edge ahci5
 65    157846 IR-PCI-MSI 4194305-edge ahci1
104     27444 IR-PCI-MSI 5775360-edge snd_hda_intel:card1
..
..

lsns

The lsns command lists information about all currently accessible namespaces or a given namespace. The namespace identifier is an inode number. The output looks like:

suramya@StarKnight:~$ lsns
        NS TYPE   NPROCS    PID USER    COMMAND
4026531834 time       87   1134 suramya /usr/bin/syncthing serve --no-browser --no-restart --logflags=0
4026531835 cgroup     87   1134 suramya /usr/bin/syncthing serve --no-browser --no-restart --logflags=0
4026531836 pid        87   1134 suramya /usr/bin/syncthing serve --no-browser --no-restart --logflags=0
4026531837 user       75   1134 suramya /usr/bin/syncthing serve --no-browser --no-restart --logflags=0
4026531838 uts        87   1134 suramya /usr/bin/syncthing serve --no-browser --no-restart --logflags=0
4026531839 ipc        76   1134 suramya /usr/bin/syncthing serve --no-browser --no-restart --logflags=0
4026531840 net        76   1134 suramya /usr/bin/syncthing serve --no-browser --no-restart --logflags=0
4026531841 mnt        85   1454 suramya /lib/systemd/systemd --user
4026532954 user        1 267290 suramya /usr/local/firefox/firefox-bin -contentproc -parentBuildID 20220705093820 -prefsLen 44808 -prefMapSize 237085 -appDir /usr/local/firefox/browser 267229 true socket
4026532955 ipc         1 267290 suramya /usr/local/firefox/firefox-bin -contentproc -parentBuildID 20220705093820 -prefsLen 44808 -prefMapSize 237085 -appDir /usr/local/firefox/browser 267229 true socket
...
...

lsattr

lsattr lists file attributes on a Linux second extended (ext2) file system; the chattr command modifies the attributes of files and lsattr displays them. File attributes are flags which affect how the file is stored and accessed by the filesystem; they are metadata stored in the file’s associated inode. The output looks like:

suramya@StarKnight:~$ lsattr
--------------e------- ./node_modules
--------------e------- ./Temp
--------------e------- ./Screenshot_20220704_122444.png
--------------e------- ./go
--------------e------- ./LinkedIn
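As a quick illustration of how lsattr pairs with chattr, making a file immutable shows up in the flag column; a sketch (the exact width of the flag field varies between e2fsprogs versions):

suramya@StarKnight:~$ touch test.txt
suramya@StarKnight:~$ sudo chattr +i test.txt
suramya@StarKnight:~$ lsattr test.txt
----i---------e------- test.txt
suramya@StarKnight:~$ rm -f test.txt
rm: cannot remove 'test.txt': Operation not permitted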

lsblk

lsblk displays details about block devices (except RAM disks), i.e. the files that represent devices connected to the PC. It queries the /sys virtual file system and the udev db to obtain the information it displays, and presents the output in a tree-like structure. The command comes pre-installed as part of the util-linux package. The output looks like:

suramya@StarKnight:~$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 111.8G  0 disk 
├─sda1   8:1    0   3.7G  0 part [SWAP]
├─sda2   8:2    0     1K  0 part 
├─sda5   8:5    0  18.6G  0 part /
└─sda6   8:6    0  89.4G  0 part /mnt/data
sdb      8:16   0   2.7T  0 disk 
└─sdb1   8:17   0   2.7T  0 part /mnt/Backup
sdc      8:32   0 223.6G  0 disk 
└─sdc1   8:33   0 223.6G  0 part /mnt/storage
sdd      8:48   0  12.7T  0 disk 
└─sdd1   8:49   0  12.7T  0 part /mnt/repository

There are a lot more useful Linux commands and no blog post can possibly list all of them. But some of these were new to me, so I thought I should share.

– Suramya

January 21, 2022

nerd-dictation: A fantastic Open Source speech to text software for Linux

After a long time of searching, I finally found speech-to-text software for Linux that actually works well enough that I can use it for dictation without having to jump through too many hoops to configure and use it. The software is called nerd-dictation and it is open source. It is fairly easy to set up compared to the other voice-to-text systems that are available, but still not at a stage where a non-tech-savvy person would be able to install it easily. (There is ongoing effort to fix that.)

The steps to install it are fairly simple and documented below for reference:

  • pip3 install vosk
  • git clone https://github.com/ideasman42/nerd-dictation.git
  • cd nerd-dictation
  • wget https://alphacephei.com/kaldi/models/vosk-model-small-en-us-0.15.zip
  • unzip vosk-model-small-en-us-0.15.zip
  • mv vosk-model-small-en-us-0.15 model

This downloads the software into the current directory. I set it up under /usr/local but it is up to you where you want it. In addition, I would recommend installing one of the larger dictionaries/models, which makes the voice recognition a lot more accurate. However, keep in mind that the larger models use a lot more memory, so you need to ensure that your computer has enough memory to support them. The smaller ones can run on systems as small as a Raspberry Pi, so you can choose depending on your system configuration. The models are available here.

nerd-dictation allows you to dictate text into any software or editor that is open, so I can dictate into a Word document, a blog post or even the command prompt. Previously I tried using software like otter.ai, which actually works quite well but doesn’t allow you to edit the text as you’re typing; you basically dictate the whole thing and the system gives you the transcription after you are done. You then have to go back and edit/correct the transcript, which can be a pain for long dictations. This software works more like Microsoft Dictate, which is built into Word. Unfortunately my Word install on Linux using Crossover doesn’t allow me to use the built-in dictate function, and I have no desire to boot into Windows just so that I can dictate a document.

The software does have some quirks: when you are talking and you pause, it takes that as the start of a new sentence, and for some reason it doesn’t put a space after the last word. So unless you’re careful you need to go back and add spaces to all the sentences you have dictated, which can get annoying. (I started manually pressing space every time I paused to add the space.) Another issue is that it doesn’t automatically capitalize words, such as those at the beginning of a sentence or the word ‘I’. This requires you to go back and edit, but that being said, it still works a lot better than the other software I have used so far on Linux. For Windows systems, Dragon voice dictation works quite well but is expensive. I tested nerd-dictation by typing out this post with it, and for the most part it worked quite well.

Running the software requires you to run commands on the command line, but I configured shortcut keys to start and stop the dictation, which makes it very convenient to use. Instructions on how to configure custom shortcut keys are available here. If you don’t want to do that, then you can start the transcription by issuing the following command (assuming the software is installed in /usr/local/nerd-dictation):

/usr/local/nerd-dictation/nerd-dictation begin --vosk-model-dir=/usr/local/nerd-dictation/model  --continuous

This starts the software and tells it that we are going to dictate for a long time. More details on the available options are on the project site. To stop the software, run the following command:

/usr/local/nerd-dictation/nerd-dictation end
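Since begin and end are separate commands, a tiny wrapper makes it possible to bind a single key to toggle dictation; a hypothetical sketch (the pgrep pattern and paths are assumptions based on my setup above):

#!/bin/bash
# Toggle nerd-dictation: stop it if it is running, otherwise start it
ND=/usr/local/nerd-dictation/nerd-dictation
if pgrep -f "nerd-dictation begin" > /dev/null; then
    "$ND" end
else
    "$ND" begin --vosk-model-dir=/usr/local/nerd-dictation/model --continuous &
fi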

I suggest you try it out if you are looking for speech-to-text software for Linux. Well, this is all for now. Will post more later.

Thanks to Hacker News: Nerd-dictation, hackable speech to text on Linux for the link.

– Suramya

June 11, 2021

Dangers of online ‘free’ html editing services: Your site is now part of SEO scam for shady services

Filed under: Computer Tips,My Thoughts,Tech Related — Suramya @ 10:52 PM

There are a lot of free services available online for tasks that historically required you to download and install software. For example, if you want to convert a .doc file to PDF, edit an image or clean up / optimize your HTML files, you can use free online services for it. As with anything, you need to take a look at who is running a site before you decide to upload your personal data to it. It is also a good idea to look at the privacy policy & data retention policy of any such site before you use it. If a site doesn’t have a privacy policy/data retention policy and wants you to upload your private data/files to it, that is a red flag.

The most recent case of such misuse came to my notice a few days ago, where a few of the highly-ranked online tools for editing / cleaning your HTML code were secretly injecting scam/spam links into the code being edited, to push themselves and their affiliated sites up the search engine rankings. SEO, or Search Engine Optimization, builds on the fact that search engines give extra weight to sites that are linked to from other legitimate sites; when an HTML cleaner adds links to its own services into each site/page it edits, it gets a leg up on every other product because it has a lot more weighted links than its competition. (Links to a site are not the only signal used to raise a site’s profile, but SEO is a huge topic that I won’t be covering in this post.)

Caspar over at casparwre.de found this out while trying to figure out why he couldn’t be the top result for ‘online scoreboard’ on Google. You can check out the full write-up here:

For instance, I saw a blog post from the German Football Association containing a link to Scorecounter. The word that was linked was “score” – yet having a link here made absolutely no sense in the context of the article. What was going on? 🤔

Here are some more examples of links I found on random domains (you need to search for “score” on the page).

Macworld Shop
NBC Washington
RICE University (The link has now been removed)
Intuit Quickbooks (The link has now been removed)


So that was the secret: the creators of Scorecounter also made an online HTML editor which injects links for certain keywords. The beauty of this scam is that by injecting links to their own HTML editor, they have created a brilliant positive feedback loop: the higher the editor rises in the search rankings, the more people use it and the more secret links they can inject.

In one way this is a fantastic (if shady) way to ensure that your product is at the top of any search for a given text/question. But usually it is only a matter of time before people figure it out, and then you lose a lot of goodwill and get a reputation for shady practices. How many people would continue to use the product if they knew that their site will be used to hawk products that they personally have not selected/validated?

I took a look at the privacy policy and the general website over at html-cleaner.com and they don’t have any note letting people know that the site introduces links to its own services and other sites into your text. This is shady behavior. Some of the reputable sites that I have seen in the past let you know that they will be adding a subtext or a note at the bottom of the page being edited, stating that it was created using xyz service. Adding the links into the text of the site makes it seem as if the owner of the site is endorsing the service, which obviously isn’t the case here.

To close the post, I just want to say that you need to be careful about where you upload data and what programs you use to edit/create things, because if they are created by people with bad ethics, they can and often do steal your private data, modify it, or use it for purposes other than what you intended when uploading it.

– Suramya

March 7, 2021

Syncing data between my machines and phones using syncthing

I have talked about how my backup strategy has evolved over the years. I am quite happy with the setup I explained in my previous post except for one minor point: I still had to manually sync the data from my laptop, Jani’s laptop and my phone to my desktop. Once it is there, the various backup processes make sure that it is backed up and secure; getting it there, however, was still a manual process.

For my laptop, I used Unison to manually check for changes and then sync them over, which works great, but I had to ensure that the sync happened in the correct direction. For Jani’s laptop, I mounted my drive on her computer over ssh using these steps and then ran robocopy to copy the files over. This worked intermittently. For some reason the system would randomly refuse to overwrite changed files with permission-denied errors, even when the permissions were set to 777. The only way to fix it was to delete all the files on my computer and then do a fresh sync. This worked, but was not user-friendly and required me to manually kick off a backup, which I did infrequently. My phone, on the other hand, was backed up manually to my computer using sftp. This was very cumbersome and I really disliked having to do it.

I have in the past looked into various technologies that allow multiple devices to sync data with each other. Unfortunately, all of them required an external connection, with a copy of the data being stored in the cloud. Since that was a show-stopper for me, I never got around to setting up my systems to automatically sync with each other. Then a few weeks ago, I came across this great article on how to create A Simple, Delay-Tolerant, Offline-Capable Mesh Network with Syncthing (+ optional NNCP). In the article John talks about Syncthing, which allowed him to create a local, serverless, peer-to-peer, open source alternative to Dropbox where his machines sync directly with each other without a server. In other words, a perfect fit for what I wanted and needed to do. So I spent a little bit of time researching Syncthing and then decided to take the plunge and set up my laptop and desktop to sync with each other. Before starting the setup I backed up all my data so that in case something went wrong I would still have a backup. Thankfully nothing did, but it is always good to have a backup.

Syncthing’s installation is pretty simple for all major operating systems, except for iPhones which are not supported. On Debian, installation just requires the following steps:

  • Run the following commands to add the “stable” channel to your APT sources:
  • echo "deb https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
    curl -s https://syncthing.net/release-key.txt | sudo apt-key add -
  • Once you have added it, run the following commands to install syncthing
  • sudo apt-get update
    sudo apt-get install syncthing

Once the software is installed, execute the syncthing binary. On my computer it is installed in /usr/bin/syncthing. Once the software starts, it will launch the web interface automatically. There is also a desktop application, but I prefer the web UI. Instructions on how to configure the folders and nodes are available in the Getting Started Guide over on the project website, so I am not going to repeat them here. Basically, you need to define the nodes and connect them to each other; if the devices are not added on both sides then the folders will not sync.
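As an aside, the Debian package also ships systemd units, so instead of launching the binary by hand you should be able to have it start at login; a sketch, assuming the user unit name the package installs (I started it manually, so treat this as an untested convenience):

systemctl --user enable --now syncthing.service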

The software has a cool discovery feature, which makes it easy to add devices on a given node. As soon as they connect to the same network they detect each other and give you the option of connecting the two. After the devices are connected, you configure the folder you want to sync and select the devices you want it synced with. The best part is that as soon as you configure one node, the other nodes get a message stating that Node 1 is attempting to share a folder with them. Clicking on accept allows you to configure the folder path etc. on that node and that’s it. The system will detect the files which need to be synced over and will copy them quickly. You can configure the sync to be bi-directional or one-way. Most of the folders in my setup are bi-directional; the only exception is Jani’s files, which is a one-way sync because I know that I am not going to modify those files on the server.

Below is what the setup looks like on my desktop. As you can see, I am syncing data from 3 different computers/phones to it and the syncs are really fast. I have copied files over to the folder on one computer and within minutes (depending on the size) they were replicated on the other computers/phone.


My Syncthing setup

I have the Android client running on my phone as well, and it instantly syncs any new photos etc. from my phone to the desktop. All I need to do is connect to the same LAN (wired or wireless) and the devices connect and sync automagically. There is an option to sync over the WAN using a relay server, but since I didn’t want that I disabled it in the setup.

Now all my data is synced to the desktop machine without me having to worry about anything or manually copying files around. Check it out if you want to sync your devices without using an external server.

– Suramya

February 20, 2021

Fixing boinc (code=exited, status=108) error

Filed under: Computer Tips,Knowledgebase,Linux/Unix Related,Tech Related — Suramya @ 2:01 AM

Earlier today I noticed that my CPU was not as active as usual and the boinc (World Community Grid) processes were no longer running on my computer. This has happened in the past when the client crashed, so I restarted the client using the following command as usual:

/etc/init.d/boinc-client restart

Unfortunately, that didn’t resolve the problem, and I thought that it could be because of the recent OS update I did on my Debian system. In the past there have been rare cases, when libraries were updated, where some programs act strangely till the computer is rebooted, so I restarted the machine expecting the process to start up without issues. Sadly, that didn’t happen, so I had to debug the problem and tried all sorts of things to resolve it.

First, I tried starting the program manually as the root user and that worked, so I knew it was something to do with the startup script. Then I searched for and removed all the lock files in the boinc and boinc-client directories. That should have resolved the problem, but it didn’t, so I then ran the status command, which gave the following output:

root@StarKnight:/var/lib/boinc-client# /etc/init.d/boinc-client status
boinc-client.service - Berkeley Open Infrastructure Network Computing Client
Loaded: loaded (/lib/systemd/system/boinc-client.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-02-20 01:26:50 IST; 9s ago
Docs: man:boinc(1)
Process: 7420 ExecStart=/usr/bin/boinc (code=exited, status=108)
Process: 7455 ExecStopPost=/bin/rm -f lockfile (code=exited, status=0/SUCCESS)
Main PID: 7420 (code=exited, status=108)
CPU: 19ms

Feb 20 01:26:40 StarKnight systemd[1]: Started Berkeley Open Infrastructure Network Computing Client.
Feb 20 01:26:50 StarKnight boinc[7420]: 20-Feb-2021 01:26:50 Another instance of BOINC is running.
Feb 20 01:26:50 StarKnight systemd[1]: boinc-client.service: Main process exited, code=exited, status=108/n/a
Feb 20 01:26:50 StarKnight systemd[1]: boinc-client.service: Failed with result 'exit-code'.

This meant that the system thought another instance of the software was running, but that wasn’t the case, as I verified using ps. A search for the status=108 code on the internet returned a few results but nothing that resolved my problem. One user who faced this issue resolved it by uninstalling everything and installing it back, but that wasn’t a step I wanted to take without trying everything else first, so I kept researching. Then I saw a post where a user was facing the same issue after they had moved the data directory to another partition and symlinked it to the original location. I had done the same thing a few weeks ago, so I moved the directory back to its original location, but that didn’t resolve anything either.

Then I thought about checking the file ownership of the directory: the files were owned by my user (suramya), and a post on the internet said that they should be owned by root. I checked on my laptop, as I have the same setup there, and found that the directories were owned by the ‘boinc‘ user on the laptop. Then I remembered changing the ownership of all files in one of my drive partitions the previous night to suramya. What I didn’t realize at the time was that the boinc-client directory was also located on that partition (after I had moved it there to recover space on my root partition).

I immediately changed the ownership of both directories back to boinc:boinc using the following command:

chown boinc:boinc /var/lib/boinc* -R

Then I restarted the daemon and that fixed the problem. I then moved the directory back to its original location (on the other partition), symlinked it to the original location, and the software still worked after I restarted the process.
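If you hit something similar, a quick way to spot files with the wrong owner before restarting the daemon is a find over the data directories; a minimal sketch:

find /var/lib/boinc* ! -user boinc -ls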

I am documenting this in case others hit the same issue.

– Suramya

November 28, 2020

My Backup strategy and how it has evolved over the years

I am a firm believer in backing up my data; some people say that I am paranoid about backing up data and I do not dispute it. All my data is backed up on multiple drives and locations and I still feel that I need additional backups. This is because I read the news, and there have been multiple cases where people lost their data because they hadn’t backed it up. Initially I wasn’t that serious about it, but when I was in college and working at the helpdesk, a PhD student came in crying because her entire PhD thesis was on a Zip drive and it wasn’t working anymore. She didn’t have a backup and was basically screwed. We tried a bunch of stuff to recover the data but didn’t manage to recover anything. That made me realize that I needed a better backup procedure, and so started my journey in creating recoverable backups.

My first backup system was a partition on my drive called backup where I kept a copy of all my important data (this was back in 2000/2001). Then I realized that if the drive died I would lose access to the backup partition as well, and I started looking for alternatives. This is around the time I bought a CD writer, so all my important data was backed up to CDs and I was confident that I could recover any lost data. Shortly afterwards I moved to DVDs for easier storage. However, I didn’t realize until a lot later that CDs & DVDs start becoming unreadable quite easily. Thankfully I didn’t lose any data, but it was a rude awakening to find that the disks I had expected to keep my data safe were starting to become unreadable within a few years.

I then did a bunch of research online and found that the best medium for storing data long term is still hard drives. I didn’t want to store anything online because I want my data to be in my control, so any online backup system was out of the question. I added multiple drives to my desktop and started syncing the data from the desktop & laptop to the backup drive using rsync. This ensured that the important data was in three locations at any given time: my desktop, my laptop and the backup drive (plus a DVD copy that I made of all my data every year).

I continued with this backup strategy for a few years but then realized that I had no way to go back to a previous version of any given document: if I deleted a file or wanted an older version of it, I only had 24 hours before the changes were synced to the backup drive and the old copy became unrecoverable. There was a case where I ended up having to dig through my DVD backups to find the original version of a file that I had changed. So I did a bit of research and found rdiff-backup. It allows a user to back up one directory to another and generates incremental backups, so you can recover/restore files based on a date range. The best part is that the software is highly efficient: once the initial backup is done, it only transmits the changes to the files in subsequent runs. Now that I have been using it, I can restore a snapshot of my data going back to 2012 quite easily.
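As an illustration, restoring a file as it existed 30 days ago is a single command; a sketch using the snapshot paths from my backup script below, with SomeFile.odt as a placeholder name (-r/--restore-as-of accepts time specs like 30D or an ISO date):

rdiff-backup -r 30D /mnt/Backup/Snapshots/Documents/SomeFile.odt /tmp/SomeFile.odt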

I was quite happy with this setup for a while, but while reading an article on best backup practices I realized that I was still depending on only one location for the backup data (the rdiff-backup snapshots), and the best practices stated that you should also store it on an external drive or at an offsite location to prevent viruses/ransomware from deleting backups. So I bought a 5TB external drive and created an encrypted partition on it to store all my important data. But I was still unhappy, because all of this was still stored at my home: if I had a fire or something I would still end up losing the data, even though my external drive was kept in a safe. I still didn’t want to store data online, but that was still the best way to ensure I had an offsite backup. I initially thought about setting up a server at my parents’ place in Delhi and backing up there, but that didn’t work out for various reasons. Plus I didn’t want to have to call them and troubleshoot backup issues over the phone.

Around this time I was reading about encrypted partitions and came up with the idea of creating an encrypted container file to store my data and then backing up the container file online. I followed the steps I outlined in my post How to encrypt your Hard-drive in Linux and created the encrypted container. Once I finished that, I had to upload the container to my webhost, since I had unlimited storage space as per my contract. Initially I wasn’t able to, because they had restricted my account’s quota, but a call to their customer support sorted it out after a bit of argument and explaining what I was doing. The next hurdle was uploading the file to the server, because of the ridiculously low upload speed I was getting from Airtel. I had a 40 Mbps connection at the time, but the upload speed was restricted to 1 Mbps because of ‘reasons’. After arguing with their support for a while, I was complaining about it at work and one of the folks suggested I check out ACT Internet. I checked out their plans and was quite impressed with the offerings, so I switched over to ACT and was able to upload the container file quickly and painlessly.

Once the container was uploaded, I had to tackle the next problem: how to update the files in the container without having to upload the entire container to the host. I experimented with a few solutions and came up with the following:

1. Mount the remote partition locally using sshfs. I mounted it with the following command (please replace with the correct hostname and username before using):

/usr/sbin/runuser -l suramya -c "sshfs -o allow_other @hostname.com:. /mnt/offsite/"

2. Once the remote partition was mounted locally, I was able to use the usual commands to mount the encrypted partition to another location:

/usr/sbin/cryptsetup luksOpen /mnt/offsite/container/Enc_vol1.img enc --key-file /root/UserKey.dat
mount /dev/mapper/enc /mnt/stash/

In an earlier iteration of the setup I wasn’t using the keyfile, so I had to manually enter the password every time I wanted to back up to the offsite location. This meant that the backup was done randomly, as and when I remembered to run the command manually. A few days ago I finally configured it to run automatically after adding the keyfile as a decryption key. (Obviously the keyfile should be protected and not be accessible to others, because it allows users to decrypt the data without entering a password.) Now the offsite backup runs once a week while the local backup runs daily, and I still back up the Backup partition to the external drive manually as and when I remember to do so.

In all I was quite happy with my setup, but then while updating the encrypted container a network issue made me believe that my remote container had become corrupted (it wasn’t, but I thought it was). At the same time I was fooling around with Microsoft OneDrive and saw that I had 1TB of storage available over there since I was an Office 365 subscriber. This gave me the idea of backing up the container to OneDrive as well as my site hosting.

I first tried copying the entire container to the drive and hit a limit because the file was too large. So I thought I would split the file into 5GB parts and then sync them to OneDrive using rclone. After installing rclone, I configured it to connect to OneDrive by issuing the following command and following the onscreen prompts:

rclone config

I then created a folder on OneDrive called container to store the split files and then tried uploading a test file using the command:

rclone copy $file OneDrive:container

where OneDrive is the name of the provider I configured in the previous step. This was successful, so I just needed to create a script that did the following:

1. Update the Container file with the latest backup
2. Split the Container file into 5GB pieces using the following command:

split --verbose -d -b5GB /mnt/repository/Container/Enc_vol1.img /mnt/repository/Container/Enc_vol_

3. Upload the pieces to OneDrive:

for file in `ls /mnt/repository/Container/Enc_vol_* |sort`; do  echo "$file";  /usr/bin/rclone copy $file OneDrive:container -v &> /tmp/oneDriveSync.log; done

This command uploads the pieces to the drive one at a time and is a bit slow, because it maxes out the upload speed at ~2 Mbps. If you split the uploads and run the commands in parallel you get much faster speeds, but keep in mind that if you are uploading more than 10 files at a time you will start getting errors about too many open connections, and then you have to wait for a few hours before you can upload again. It took a while to upload the chunks, but now my files are stored in yet another location and the system is configured to sync to OneDrive once a month.
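For completeness, getting the container back means downloading the pieces and stitching them together in order; a sketch, assuming the same naming as the split command above and a hypothetical /mnt/repository/Restore directory:

rclone copy OneDrive:container /mnt/repository/Restore -v
cat /mnt/repository/Restore/Enc_vol_* > /mnt/repository/Restore/Enc_vol1.img

The shell expands the glob in sorted order, which matches the numeric suffixes that split generated.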

So, as of now my files are backed up as follows:

  • /mnt/Backup: Local Drive. All changes are backed up daily using rdiff-backup
  • /mnt/offsite: Encrypted Container stored online. All changes are backed up weekly using rsync
  • OneDrive: Encrypted Container stored at Microsoft OneDrive. All changes are backed up monthly using rsync
  • External Drive: Encrypted backup stored on an external hard drive using rsync. Changes are backed up manually and infrequently.
  • Laptop: All Important files are copied over to the laptop using Unison/rsync manually so that I can access my data while traveling

Finally, I am also considering backing up the snapshot data to Blu-ray disks, but that will take time so I haven’t gotten around to it yet.

Since I have this elaborate backup procedure, I wasn’t worried much when one of my disks died last week and I was able to continue working without issues or worries about losing data. I still think I can enhance the backups I take, but for now I am good. If you are interested in my backup script, an extract of the code is listed below:

function check_failure ()
{
	if [ $? == 0 ]; then
		logger "INFO: $1 Succeeded"
	else
		logger "FATAL: Execution of $1 failed"
		wall "FATAL: Execution of $1 failed"
		exit 1
	fi
}

###
# Syncing to internal Backup Drive
###

function local_backup ()
{
	export BACKUP_ROOT=/mnt/Backup/Snapshots
	export PARENT_ROOT=/mnt/repository

	logger "INFO: Starting System Backup"

	rdiff-backup -v 5 /mnt/data/Documents/ $BACKUP_ROOT/Documents/
	check_failure "Backing up Documents"

	rdiff-backup -v 5 /mnt/repository/Documents/Jani/ $BACKUP_ROOT/Jani_Documents/
	check_failure "Backing up Jani Documents"

	rdiff-backup -v 5 $PARENT_ROOT/Programs/ $BACKUP_ROOT/Programs/
	check_failure "Backing up Programs"

	..
	..

	logger "INFO: All Backups Completed Successfully."
}

### 
# Syncing to Off-Site Backup location
###

function offsite_backup
{
	export PARENT_ROOT=/mnt/repository

	# First we mount the remote directory to local
	logger "INFO: Mounting External Drive"
	/usr/sbin/runuser -l suramya -c "sshfs -o allow_other username@remotehost:. /mnt/offsite/"
	check_failure "Mounting External Drive"

	# Open the Encrypted Partition
	logger "INFO: Opening Encrypted Partition. Please provide password."
	/usr/sbin/cryptsetup luksOpen /mnt/offsite/container/Enc_vol1.img enc --key-file /root/keyfile1
	check_failure "Mounting Encrypted Partition Part 1"

	# Mount the device
	logger "INFO: Mounting the drive"
	mount /dev/mapper/enc /mnt/stash/
	check_failure "Mounting Encrypted Partition Part 2"

	logger "INFO: Starting System Backup"
	rsync -avz --delete  /mnt/data/Documents /mnt/stash/
	check_failure "Backing up Documents offsite"
	rsync -avz --delete /mnt/repository/Documents/Jani/ /mnt/stash/Jani_Documents/
	check_failure "Backing up Jani Documents offsite"
	..
	..
	..

	umount /mnt/stash/
	/usr/sbin/cryptsetup luksClose enc
	umount /mnt/offsite/

	logger "INFO: Offsite Backup Completed"
}

This is how I make sure my data is backed up. All of Jani’s data is also backed up to my system using robocopy (as she is running Windows) and then gets backed up by the scripts I explained above as usual. I also back up my website/blog/databases, but that’s done using a simple script. Let me know if you are interested and I will share it as well.

This is all for now. Let me know if you have any questions about the backup strategy or if you want to make fun of me. 🙂 Will write more later.

– Suramya

September 30, 2020

How to fix vlc’s Core dumping issue while playing some videos

Over the past 2 days I found that the VLC install on my computer was suddenly having issues playing some of the video files on it. Initially I thought that it was a problem with the video file, then I realized that this was also happening with videos that had been playing fine earlier. When I ran vlc from the command line to play a problem video, it gave the following output when it crashed:

[00005587b42751b0] dummy interface: using the dummy interface module…
[00007f00c4004980] egl_x11 gl error: cannot select OpenGL API
[00007f00c4004980] gl gl: Initialized libplacebo v2.72.0 (API v72)
[00007f00c402a310] postproc filter error: Unsupported input chroma (VAOP)
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)
[00007f00c4028d40] main filter error: Failed to create video converter
[00007f00bd986e50] chain filter error: Too high level of recursion (3)


[00007f00c44265c0] chain filter error: Too high level of recursion (3)
[00007f00c4414240] main filter error: Failed to create video converter
[00007f00bd9020d0] main filter error: Failed to create video converter
[00007f00cc047d70] main video output error: Failed to create video converter
[00007f00cc047d70] main video output error: Failed to compensate for the format changes, removing all filters
[00007f00c4004980] gl gl: Initialized libplacebo v2.72.0 (API v72)

A Google search told me that a possible solution was to disable hardware acceleration in the video settings, but that didn’t fix my problem. So I took a look at the kernel.log file in /var/log and saw the following error logged when the program crashed:

Sep 30 21:11:44 StarKnight kernel: [173399.132554] vlc[91472]: segfault at 28000000204 ip 00007f2d8916c1d8 sp 00007f2d8aa69db0 error 4 in libpostproc.so.55.7.100[7f2d8915c000+1d000]
Sep 30 21:11:44 StarKnight kernel: [173399.132568] Code: 98 48 8d 44 07 20 0f 18 08 8b 44 24 08 4d 8d 0c 1a 4d 8d 04 2b 85 c0 0f 85 cb fd ff ff 4c 8b 6c 24 28 4b 8d 04 29 4b 8d 14 20 <41> 0f 6f 01 43 0f 6f 0c 29 41 0f 7f 00 43 0f 7f 0c 20 43 0f 6f 04

I spent about an hour searching for a solution using the details from kernel.log but got nowhere. Finally I found a forum post where one of the solutions offered was to remove the vlc configuration files. Since I didn’t have any other bright ideas, I renamed the vlc config folder by issuing the following command:

mv ~/.config/vlc ~/.config/vlc_09302020
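As an aside, VLC also ships a command-line option that is supposed to reset the preferences in place; I haven’t tested whether it would have fixed this particular crash, so treat it as an untested alternative to the rename above:

vlc --reset-config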

Then I started vlc, and just like that everything started working again. 🙂 Not sure what caused the settings to get borked in the first place, but the issue is fixed now so all is well.

– Suramya
