Compare commits

...

87 Commits

Author SHA1 Message Date
krateng
3ba27ffc37
Merge pull request #407 from noirscape/dateutil
Add a new config option to use a ZoneInfo timezone.
2025-02-17 16:37:42 +01:00
noirscape
300e2c1ff7 fix and/or mistake 2025-02-15 01:27:26 +01:00
noirscape
5c343053d9 replace utcnow calls
The `utcnow` function was deprecated in Python 3.12 in favor of passing a timezone object into `now()` instead.
2025-02-15 01:25:21 +01:00
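
For reference, the replacement pattern looks like this (a minimal sketch, not Maloja's actual code):

```python
from datetime import datetime, timezone

# Deprecated since Python 3.12 (and returns a naive datetime):
# now = datetime.utcnow()

# Replacement: pass a timezone object into now() to get an aware datetime
now = datetime.now(timezone.utc)
```
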
noirscape
7dab61e420 Add a new config option to use a ZoneInfo timezone.
The ZoneInfo class was added in Python 3.9, and the Alpine timezone package (tzdata) is already installed in the Containerfile as part of the runtime dependencies.

zoneinfo provides instances of the same `tzinfo` interface already used for the existing OFFSET timezones (making it a drop-in option), but implements them using the IANA timezone database (making it historically accurate and, hopefully, accurate for future rule changes on new builds as well).

The implementation in this commit only overrides the offset timezone if LOCATION_TIMEZONE has been set and is in the installed tzdata package, making it fully backwards compatible with existing options.
2025-02-15 01:09:42 +01:00
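
As a rough illustration of why this is a drop-in option (a sketch, not Maloja's configuration code): both the existing offset style and the new IANA style yield `tzinfo` instances, so either can be handed to the same datetime calls.

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9, backed by IANA tzdata

# Existing style: a fixed UTC offset, blind to DST and historical changes
offset_tz = timezone(timedelta(hours=1))

# New style: an IANA zone, accurate across DST transitions and history
location_tz = ZoneInfo("Europe/Berlin")

print(datetime.now(offset_tz))    # always +01:00
print(datetime.now(location_tz))  # +01:00 or +02:00 depending on the date
```
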
krateng
5296960d68
Merge pull request #404 from jackwilsdon/fix-local-permissions
Fix permissions after copying initial local files
2025-02-14 05:20:18 +01:00
Jack Wilsdon
e060241acb Fix permissions after copying initial local files 2025-02-13 19:16:24 +00:00
krateng
c571ffbf07 Version bump 2025-02-10 04:29:15 +01:00
krateng
767a6bca26 Dev scripts 2025-02-10 03:58:14 +01:00
krateng
ffed0c29b0 Update pyproject.toml, GH-399 2025-02-10 03:58:04 +01:00
krateng
ca65813619
Merge pull request #399 from noirscape/patch-1
Read out mimetypes for cached images
2025-02-10 03:50:25 +01:00
noirscape
5926dc3307
its a runtime package 2025-02-09 17:50:17 +01:00
noirscape
811bc16a3f
Add libmagic to containerfile 2025-02-09 17:46:25 +01:00
noirscape
a4ec29dd4c
Add python-magic to requirements.txt 2025-02-09 17:44:49 +01:00
noirscape
a8293063a5
Read out mimetypes for cached images
On Firefox, because no mimetype is set when serving the file, it is treated as raw HTML, causing cached images not to be served to the client (the fetch is aborted early). This doesn't appear to be an issue in Google Chrome.

This commit fixes #349.
2025-02-09 17:38:40 +01:00
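
The accompanying commits add `python-magic` (and `libmagic` in the Containerfile) to sniff the type from file contents. A minimal sketch of the idea, with illustrative names (the path and response handling are assumptions, not Maloja's actual code):

```python
import magic  # python-magic, wrapping libmagic

def mimetype_for(path: str) -> str:
    # Sniff the file contents; cached images are stored without an extension
    return magic.from_file(path, mime=True)  # e.g. "image/jpeg"

# Hypothetical use when serving a cached image:
# response.content_type = mimetype_for("cache/images/track_1234")
```
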
krateng
126d155208 Implement import for ghan CSV files, fix GH-382 2025-02-05 19:31:09 +01:00
krateng
7f774f03c4 Include import in normal server startup, GH-393 2025-02-05 18:24:41 +01:00
krateng
cc64c894f0 Update remaining deprecated API URLs, GH-368 2025-02-03 04:04:48 +01:00
krateng
76c013e130 Merge remote-tracking branch 'origin/master' 2025-01-28 07:17:40 +01:00
krateng
9a6c51a36d Update search URL, probably fix GH-368 2025-01-28 07:16:46 +01:00
krateng
e3a578da2f
Merge pull request #395 from RealHypnoticOcelot/patch-1
Fix infinite scrobble loop with Navidrome
2025-01-27 04:36:08 +01:00
HypnoticOcelot
a851e36485
Update listenbrainz.py 2025-01-25 17:08:55 +00:00
HypnoticOcelot
0e928b4007
Update listenbrainz.py 2025-01-24 12:32:05 +00:00
krateng
63386b5ede Python and Alpine upgrade 2025-01-19 19:13:46 +01:00
krateng
9d21800eb9 Remove old setup.py (how was this still here?) 2025-01-19 05:53:34 +01:00
krateng
5a95d4e056 Remove file path context managers, GH-390 2025-01-19 05:53:08 +01:00
krateng
968bea14d9 Update development info 2025-01-19 03:31:54 +01:00
krateng
5e62ccc254 Actually remove install helpers 2025-01-19 03:10:43 +01:00
krateng
273713cdc4 Simpler dev testing with compose 2025-01-19 03:00:00 +01:00
krateng
f8b10ab68c Pin dependencies, close GH-390 2025-01-19 02:51:23 +01:00
krateng
922eae7b68 Remove manual install helpers 2025-01-19 02:36:12 +01:00
krateng
cf0a856040 Remove daemonization capabilities 2025-01-19 02:35:45 +01:00
krateng
26f26f36cb Add debounce timer to search, GH-370 2025-01-16 06:22:35 +01:00
krateng
1462883ab5 Alrighty 2025-01-16 05:07:43 +01:00
krateng
a0b83be095 Let's try this 2025-01-16 04:59:42 +01:00
krateng
2750241e61 Fix build (maybe) 2025-01-15 21:19:39 +01:00
krateng
a7dcd3df8a Version bump 2025-01-15 17:28:17 +01:00
krateng
c6cf28896c Merge remote-tracking branch 'origin/master' 2024-05-05 18:39:26 +02:00
krateng
9efdf90312 Add import for ghan64 last.fm exporter, GH-339 2024-05-05 18:37:46 +02:00
krateng
b35bfdc2e4
Merge pull request #293 from duckfromdiscord/as2.0-xml
make `auth.getMobileSession` return XML
2024-05-05 17:47:27 +02:00
krateng
88bf6d2337
Merge pull request #328 from ThinkChaos/fix/misc
Misc. fixes from first use
2024-05-05 17:39:07 +02:00
krateng
2c2d13e39c Fix casting 2024-05-05 16:51:16 +02:00
krateng
152e3948a1 Fix GH-341 2024-05-05 16:50:51 +02:00
krateng
33ed2abdea Restrict shown artists in cells 2024-03-14 17:51:52 +01:00
krateng
915808a020 Add some type hints 2024-03-14 17:51:28 +01:00
krateng
163746c06e Fix bug in API exception handling 2024-02-24 19:15:51 +01:00
krateng
738f42d49f Design fix 2024-02-20 16:40:07 +01:00
krateng
f4a5c2fb3d Add some DB maintenance 2024-02-20 16:36:57 +01:00
ThinkChaos
a99831d453
fix(log): replace "Data" with "State" to match printed value 2024-02-05 21:31:08 -05:00
ThinkChaos
efd7838b02
fix(conf): don't use cache dir as base for all data dirs
This is a bit tricky, but the issue is that in a `for` loop the loop
variable is shared across all iterations. So capturing it in the
`lambda` without `k=k` doesn't capture the value of the current
iteration, but the value of the last one.
In this code that happened to be `cache`, so all `data_dirs` usage was
ending up with a path under `directory_cache`.
2024-02-05 21:31:08 -05:00
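
A minimal reproduction of the closure bug described above (directory names are illustrative, not Maloja's actual keys):

```python
data_dirs = {}
for k in ("config", "state", "cache"):
    # BUG: the lambda looks up k when it is called, after the loop has
    # finished, so every entry resolves to the last value, "cache"
    data_dirs[k] = lambda: f"directory_{k}"
print(data_dirs["config"]())  # directory_cache

for k in ("config", "state", "cache"):
    # FIX: a default argument is evaluated at definition time, capturing
    # the value of the current iteration
    data_dirs[k] = lambda k=k: f"directory_{k}"
print(data_dirs["config"]())  # directory_config
```
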
ThinkChaos
a816147e2e
feat: readonly config support read-only filesystem 2024-02-05 21:31:08 -05:00
krateng
ac5c58c919 Cleanup 2024-01-20 20:32:50 +01:00
krateng
c648b25d28 Notification design 2024-01-20 20:14:00 +01:00
krateng
ed34992d8b Better exceptions for existing scrobble timestamps 2024-01-20 20:11:50 +01:00
krateng
386f3c4a41 Exceptional commit 2024-01-20 18:19:34 +01:00
krateng
fbe10930a2 How tf did that happen, Part 2 2024-01-15 15:36:48 +01:00
krateng
a95b2420b2 How tf did that happen 2024-01-07 06:51:07 +01:00
duck
16b977d874 allow json format for authmobile, default to XML 2024-01-03 21:57:42 -05:00
duck
5ec8035cb5
Merge branch 'krateng:master' into as2.0-xml 2024-01-03 21:50:56 -05:00
krateng
259e3b06bb Fix GH-316 2024-01-03 22:03:08 +01:00
krateng
c75bd4fcc3 This should work 2024-01-01 19:15:17 +01:00
krateng
a4ae92e642 Fix start script 2024-01-01 18:56:41 +01:00
krateng
a7dcf6d41d Upgrade base container to 3.19 2024-01-01 18:31:19 +01:00
krateng
6b2f1892f8 Merge branch 'doreahupgrade'
# Conflicts:
#	dev/releases/3.2.yml
2024-01-01 17:52:04 +01:00
krateng
f1c86973c9 Fix incomplete scrobble results with associated artists 2024-01-01 15:46:49 +01:00
krateng
b725c98fa5 Bump requirements 2023-12-28 04:05:15 +01:00
krateng
1f1a65840c Prefer real scrobbles in case of artist chart tie 2023-12-28 03:47:07 +01:00
krateng
436b40821a Return multiple top results for ranges, GH-278 2023-12-28 03:09:25 +01:00
krateng
d160078def Design adjustment 2023-12-28 02:45:46 +01:00
krateng
ea6e27de5c Add feedback for failed scrobble submission, fix GH-297 2023-12-28 02:26:32 +01:00
krateng
472281230c Make Maloja export file recognition more resilient, fix GH-309 2023-12-28 02:05:22 +01:00
krateng
966739e677 Clarified setting, close GH-267 2023-12-28 01:44:46 +01:00
krateng
4c487232c0 Add hint to server setup, close GH-272 2023-12-28 01:39:17 +01:00
krateng
20d8a109d6 Make first scrobble register slightly more efficient, close GH-308 2023-12-28 01:22:40 +01:00
krateng
1ce3119dda Design adjustments 2023-12-27 18:12:57 +01:00
krateng
8e06c34323 Fallback to album art when no track image 2023-12-27 16:11:03 +01:00
krateng
fd4c99f888 Fix GH-311, fix GH-282 2023-12-27 14:36:43 +01:00
duck
f7a9df7446
Merge branch 'krateng:master' into as2.0-xml 2023-12-23 14:16:38 -05:00
krateng
7d6753042f Simplify permission check, GH-313 2023-12-19 18:50:22 +01:00
krateng
7ec5e88bc4 Upgrade auth and logging to new doreah 2023-12-18 05:49:01 +01:00
krateng
3ff92759fb
Merge pull request #304 from Velocidensity/correct-rules
Correct spaces to tabs in predefined rules
2023-12-17 06:16:45 +01:00
krateng
7ccde9cf91
Merge pull request #299 from SirMartin/#296-show-amont-tracks-with-no-album
#296 add extra information about the amount of songs with no album
2023-12-17 06:16:07 +01:00
krateng
bda134a7f7
Merge pull request #305 from timkicker/master
adjust text for newly added album support
2023-12-17 06:15:14 +01:00
timkicker
048dac3186 adjust text for newly added album support 2023-11-27 19:53:30 +01:00
Velocidensity
4a02ee2ba5 Correct spaces to tabs in predefined rules 2023-11-21 04:49:03 +01:00
Eduardo
0d4e8dbc58 #296 add extra information about the amount of songs with no album 2023-11-16 18:35:56 +01:00
duck
be6b796b20 auth.getMobileSession return XML with token provided 2023-11-11 18:14:20 -05:00
duck
7dbd704c5d make auth.getMobileSession return XML 2023-11-11 17:56:40 -05:00
71 changed files with 859 additions and 932 deletions

.github/FUNDING.yml

@ -1 +1,2 @@
custom: ["https://paypal.me/krateng"]
patreon: krateng


@ -4,6 +4,7 @@ on:
push:
tags:
- 'v*'
- 'runaction-docker'
jobs:
push_to_registry:


@ -4,11 +4,14 @@ on:
push:
tags:
- 'v*'
- 'runaction-pypi'
jobs:
publish_to_pypi:
name: Push Package to PyPI
runs-on: ubuntu-latest
permissions:
id-token: write
steps:
- name: Check out the repo
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11
@ -25,7 +28,4 @@ jobs:
run: python -m build
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@b7f401de30cb6434a1e19f805ff006643653240e
with:
user: __token__
password: ${{ secrets.PYPI_API_TOKEN }}
uses: pypa/gh-action-pypi-publish@67339c736fd9354cd4f8cb0b744f2b82a74b5c70

.gitignore

@ -3,7 +3,6 @@
# environments / builds
.venv/*
testdata*
/dist
/build
/*.egg-info


@ -1,36 +0,0 @@
# Contributor: Johannes Krattenmacher <maloja@dev.krateng.ch>
# Maintainer: Johannes Krattenmacher <maloja@dev.krateng.ch>
pkgname=maloja
pkgver=3.0.0-dev
pkgrel=0
pkgdesc="Self-hosted music scrobble database"
url="https://github.com/krateng/maloja"
arch="noarch"
license="GPL-3.0"
depends="python3 tzdata"
pkgusers=$pkgname
pkggroups=$pkgname
depends_dev="gcc g++ python3-dev libxml2-dev libxslt-dev libffi-dev libc-dev py3-pip linux-headers"
makedepends="$depends_dev"
source="
$pkgname-$pkgver.tar.gz::https://github.com/krateng/maloja/archive/refs/tags/v$pkgver.tar.gz
"
builddir="$srcdir"/$pkgname-$pkgver
build() {
cd $builddir
python3 -m build .
pip3 install dist/*.tar.gz
}
package() {
mkdir -p /etc/$pkgname || return 1
mkdir -p /var/lib/$pkgname || return 1
mkdir -p /var/cache/$pkgname || return 1
mkdir -p /var/logs/$pkgname || return 1
}
# TODO
sha512sums="a674eaaaa248fc2b315514d79f9a7a0bac6aa1582fe29554d9176e8b551e8aa3aa75abeebdd7713e9e98cc987e7bd57dc7a5e9a2fb85af98b9c18cb54de47bf7 $pkgname-${pkgver}.tar.gz"


@ -1,4 +1,4 @@
FROM lsiobase/alpine:3.17 as base
FROM lsiobase/alpine:3.21 AS base
WORKDIR /usr/src/app
@ -29,16 +29,19 @@ RUN \
apk add --no-cache \
python3 \
py3-lxml \
libmagic \
tzdata && \
echo "" && \
echo "**** install pip dependencies ****" && \
python3 -m venv /venv && \
. /venv/bin/activate && \
python3 -m ensurepip && \
pip3 install -U --no-cache-dir \
pip install -U --no-cache-dir \
pip \
wheel && \
echo "" && \
echo "**** install maloja requirements ****" && \
pip3 install --no-cache-dir -r requirements.txt && \
pip install --no-cache-dir -r requirements.txt && \
echo "" && \
echo "**** cleanup ****" && \
apk del --purge \
@ -56,6 +59,8 @@ RUN \
echo "**** install maloja ****" && \
apk add --no-cache --virtual=install-deps \
py3-pip && \
python3 -m venv /venv && \
. /venv/bin/activate && \
pip3 install /usr/src/app && \
apk del --purge \
install-deps && \


@ -9,49 +9,14 @@ Clone the repository and enter it.
## Environment
To avoid cluttering your system, consider using a [virtual environment](https://docs.python.org/3/tutorial/venv.html).
Your system needs several packages installed. For supported distributions, this can be done with e.g.
```console
sh ./install/install_dependencies_alpine.sh
```
For other distros, try to find the equivalents of the packages listed or simply check your error output.
Then install all Python dependencies with
```console
pip install -r requirements.txt
```
To avoid cluttering your system, consider using a [virtual environment](https://docs.python.org/3/tutorial/venv.html), or better yet run the included `docker-compose.yml` file.
Your IDE should let you run the file directly, otherwise you can execute `docker compose -f dev/docker-compose.yml -p maloja up --force-recreate --build`.
## Running the server
For development, you might not want to install maloja files all over your filesystem. Use the environment variable `MALOJA_DATA_DIRECTORY` to force all user files into one central directory - this way, you can also quickly change between multiple configurations.
Use the environment variable `MALOJA_DATA_DIRECTORY` to force all user files into one central directory - this way, you can also quickly change between multiple configurations.
You can quickly run the server with all your local changes with
```console
python3 -m maloja run
```
You can also build the package with
```console
pip install .
```
## Docker
You can also always build and run the server with
```console
sh ./dev/run_docker.sh
```
This will use the directory `testdata`.
## Further help


@ -40,15 +40,8 @@ You can check [my own Maloja page](https://maloja.krateng.ch) as an example inst
## How to install
### Requirements
Maloja should run on any x86 or ARM machine that runs Python.
It is highly recommended to use **Docker** or **Podman**.
Your CPU should have a single core passmark score of at the very least 1500. 500 MB RAM should give you a decent experience, but performance will benefit greatly from up to 2 GB.
### Docker / Podman
To avoid issues with version / dependency mismatches, Maloja should only be used in **Docker** or **Podman**, not on bare metal.
I cannot offer any help for bare metal installations (but using venv should help).
Pull the [latest image](https://hub.docker.com/r/krateng/maloja) or check out the repository and use the included Containerfile.
@ -67,11 +60,7 @@ An example of a minimum run configuration to access maloja via `localhost:42010`
docker run -p 42010:42010 -v $PWD/malojadata:/mljdata -e MALOJA_DATA_DIRECTORY=/mljdata krateng/maloja
```
#### Linux Host
**NOTE:** If you are using [rootless containers with Podman](https://developers.redhat.com/blog/2020/09/25/rootless-containers-with-podman-the-basics#why_podman_) this DOES NOT apply to you.
If you are running Docker on a **Linux Host** you should specify `user:group` ids of the user who owns the folder on the host machine bound to `MALOJA_DATA_DIRECTORY` in order to avoid [docker file permission problems.](https://ikriv.com/blog/?p=4698) These can be specified using the [environmental variables **PUID** and **PGID**.](https://docs.linuxserver.io/general/understanding-puid-and-pgid)
If you are using [rootless containers with Podman](https://developers.redhat.com/blog/2020/09/25/rootless-containers-with-podman-the-basics#why_podman_) the following DOES NOT apply to you, but if you are running **Docker** on a **Linux Host** you should specify `user:group` ids of the user who owns the folder on the host machine bound to `MALOJA_DATA_DIRECTORY` in order to avoid [docker file permission problems.](https://ikriv.com/blog/?p=4698) These can be specified using the [environmental variables **PUID** and **PGID**.](https://docs.linuxserver.io/general/understanding-puid-and-pgid)
To get the UID and GID for the current user run these commands from a terminal:
@ -84,33 +73,6 @@ The modified run command with these variables would look like:
docker run -e PUID=1000 -e PGID=1001 -p 42010:42010 -v $PWD/malojadata:/mljdata -e MALOJA_DATA_DIRECTORY=/mljdata krateng/maloja
```
### PyPI
You can install Maloja with
```console
pip install malojaserver
```
To make sure all dependencies are installed, you can also use one of the included scripts in the `install` folder.
### From Source
Clone this repository and enter the directory with
```console
git clone https://github.com/krateng/maloja
cd maloja
```
Then install all the requirements and build the package, e.g.:
```console
sh ./install/install_dependencies_alpine.sh
pip install -r requirements.txt
pip install .
```
### Extras
@ -123,30 +85,18 @@ Then install all the requirements and build the package, e.g.:
### Basic control
When not running in a container, you can run the application with `maloja run`. You can also run it in the background with
`maloja start` and `maloja stop`, but this might not be supported in the future.
When not running in a container, you can run the application with `maloja run`.
### Data
If you would like to import your previous scrobbles, use the command `maloja import *filename*`. This works on:
If you would like to import your previous scrobbles, copy them into the import folder in your data directory. This works on:
* a Last.fm export generated by [benfoxall's website](https://benjaminbenben.com/lastfm-to-csv/) ([GitHub page](https://github.com/benfoxall/lastfm-to-csv))
* a Last.fm export generated by [ghan64's website](https://lastfm.ghan.nl/export/)
* an official [Spotify data export file](https://www.spotify.com/us/account/privacy/)
* an official [ListenBrainz export file](https://listenbrainz.org/profile/export/)
* the export of another Maloja instance
⚠️ Never import your data while Maloja is running. If you need to import inside a Docker container, start it in shell mode instead and perform the import before starting the container, as shown below:
```console
docker run -it --entrypoint sh -v $PWD/malojadata:/mljdata -e MALOJA_DATA_DIRECTORY=/mljdata krateng/maloja
cd /mljdata
maloja import my_last_fm_export.csv
```
To backup your data, run `maloja backup`, optional with `--include_images`.
### Customization
* Have a look at the [available settings](settings.md) and specify your choices in `/etc/maloja/settings.ini`. You can also set each of these settings as an environment variable with the prefix `MALOJA_` (e.g. `MALOJA_SKIP_SETUP`).


@ -4,4 +4,4 @@
echo -e "\nMaloja is starting!"
exec \
s6-setuidgid abc python -m maloja run
s6-setuidgid abc /venv/bin/python -m maloja run

dev/clear_testdata.sh

@ -0,0 +1,3 @@
sudo rm -r ./testdata
mkdir ./testdata
chmod 777 ./testdata

dev/docker-compose.yml

@ -0,0 +1,13 @@
services:
maloja:
build:
context: ..
dockerfile: ./Containerfile
ports:
- "42010:42010"
volumes:
- "./testdata:/data"
environment:
- "MALOJA_DATA_DIRECTORY=/data"
- "PUID=1000"
- "PGID=1000"


@ -1,21 +0,0 @@
import toml
import os
with open("pyproject.toml") as filed:
data = toml.load(filed)
info = {
'name':data['project']['name'],
'license':"GPLv3",
'version':data['project']['version'],
'architecture':'all',
'description':'"' + data['project']['description'] + '"',
'url':'"' + data['project']['urls']['homepage'] + '"',
'maintainer':f"\"{data['project']['authors'][0]['name']} <{data['project']['authors'][0]['email']}>\"",
}
for target in ["apk","deb"]:
lcmd = f"fpm {' '.join(f'--{key} {info[key]}' for key in info)} -s python -t {target} . "
print(lcmd)
os.system(lcmd)


@ -32,8 +32,26 @@ minor_release_name: "Nicole"
- "[Bugfix] Fixed Spotify authentication thread blocking the process from terminating"
- "[Technical] Upgraded all third party modules to use requests module and send User Agent"
3.2.2:
commit: "febaff97228b37a192f2630aa331cac5e5c3e98e"
notes:
- "[Security] Fixed XSS vulnerability in error page (Disclosed by https://github.com/NULLYUKI)"
- "[Architecture] Reworked the default directory selection"
- "[Feature] Added option to show scrobbles on tile charts"
- "[Bugfix] Fixed Last.fm authentication"
- "[Bugfix] Fixed Last.fm authentication"
3.2.3:
commit: "a7dcd3df8a6b051a1f6d0b7d10cc5af83502445c"
notes:
- "[Architecture] Upgraded doreah, significant rework of authentication"
- "[Bugfix] Fixed initial permission check"
- "[Bugfix] Fixed and updated various texts"
- "[Bugfix] Fixed moving tracks to different album"
3.2.4:
notes:
- "[Architecture] Removed daemonization capabilities"
- "[Architecture] Moved import to main server process"
- "[Feature] Implemented support for ghan's csv Last.fm export"
- "[Performance] Debounced search"
- "[Bugfix] Fixed stuck scrobbling from Navidrome"
- "[Bugfix] Fixed missing image mimetype"
- "[Technical] Pinned dependencies"
- "[Technical] Upgraded Python and Alpine"


@ -1,2 +0,0 @@
docker build -t maloja . -f Containerfile
docker run --rm -p 42010:42010 -v $PWD/testdata:/mlj -e MALOJA_DATA_DIRECTORY=/mlj maloja


@ -1,2 +0,0 @@
podman build -t maloja . -f Containerfile
podman run --rm -p 42010:42010 -v $PWD/testdata:/mlj -e MALOJA_DATA_DIRECTORY=/mlj maloja


@ -1,36 +0,0 @@
# Contributor: Johannes Krattenmacher <maloja@dev.krateng.ch>
# Maintainer: Johannes Krattenmacher <maloja@dev.krateng.ch>
pkgname={{ tool.flit.module.name }}
pkgver={{ project.version }}
pkgrel=0
pkgdesc="{{ project.description }}"
url="{{ project.urls.homepage }}"
arch="noarch"
license="GPL-3.0"
depends="{{ tool.osreqs.alpine.run | join(' ') }}"
pkgusers=$pkgname
pkggroups=$pkgname
depends_dev="{{ tool.osreqs.alpine.build | join(' ') }}"
makedepends="$depends_dev"
source="
$pkgname-$pkgver.tar.gz::{{ project.urls.repository }}/archive/refs/tags/v$pkgver.tar.gz
"
builddir="$srcdir"/$pkgname-$pkgver
build() {
cd $builddir
python3 -m build .
pip3 install dist/*.tar.gz
}
package() {
mkdir -p /etc/$pkgname || return 1
mkdir -p /var/lib/$pkgname || return 1
mkdir -p /var/cache/$pkgname || return 1
mkdir -p /var/logs/$pkgname || return 1
}
# TODO
sha512sums="a674eaaaa248fc2b315514d79f9a7a0bac6aa1582fe29554d9176e8b551e8aa3aa75abeebdd7713e9e98cc987e7bd57dc7a5e9a2fb85af98b9c18cb54de47bf7 $pkgname-${pkgver}.tar.gz"


@ -1,40 +0,0 @@
FROM alpine:3.15
# Python image includes two Python versions, so use base Alpine
# Based on the work of Jonathan Boeckel <jonathanboeckel1996@gmail.com>
WORKDIR /usr/src/app
# Install run dependencies first
RUN apk add --no-cache {{ tool.osreqs.alpine.run | join(' ') }}
# system pip could be removed after build, but apk then decides to also remove all its
# python dependencies, even if they are explicitly installed as python packages
# whut
RUN \
apk add py3-pip && \
pip install wheel
COPY ./requirements.txt ./requirements.txt
RUN \
apk add --no-cache --virtual .build-deps {{ tool.osreqs.alpine.build | join(' ') }} && \
pip install --no-cache-dir -r requirements.txt && \
apk del .build-deps
# no chance for caching below here
COPY . .
RUN pip install /usr/src/app
# Docker-specific configuration
# defaulting to IPv4 is no longer necessary (default host is dual stack)
ENV MALOJA_SKIP_SETUP=yes
ENV PYTHONUNBUFFERED=1
EXPOSE 42010
# use exec form for better signal handling https://docs.docker.com/engine/reference/builder/#entrypoint
ENTRYPOINT ["maloja", "run"]


@ -1,4 +0,0 @@
{% include 'install/install_dependencies_alpine.sh.jinja' %}
apk add py3-pip
pip install wheel
pip install malojaserver


@ -1,4 +0,0 @@
{% include 'install/install_dependencies_debian.sh.jinja' %}
apt install python3-pip
pip install wheel
pip install malojaserver


@ -1,4 +0,0 @@
#!/usr/bin/env sh
apk update
apk add \
{{ (tool.osreqs.alpine.build + tool.osreqs.alpine.run + tool.osreqs.alpine.opt) | join(' \\\n\t') }}


@ -1,4 +0,0 @@
#!/usr/bin/env sh
apt update
apt install \
{{ (tool.osreqs.debian.build + tool.osreqs.debian.run + tool.osreqs.debian.opt) | join(' \\\n\t') }}


@ -1,17 +1,21 @@
"""
Create necessary files from sources of truth. Currently just the requirements.txt files.
"""
import toml
import os
import jinja2
env = jinja2.Environment(
loader=jinja2.FileSystemLoader('dev/templates'),
loader=jinja2.FileSystemLoader('./templates'),
autoescape=jinja2.select_autoescape(['html', 'xml']),
keep_trailing_newline=True
)
with open("pyproject.toml") as filed:
with open("../pyproject.toml") as filed:
data = toml.load(filed)
templatedir = "./dev/templates"
templatedir = "./templates"
for root,dirs,files in os.walk(templatedir):
@ -23,7 +27,7 @@ for root,dirs,files in os.walk(templatedir):
if not f.endswith('.jinja'): continue
srcfile = os.path.join(root,f)
trgfile = os.path.join(reldirpath,f.replace(".jinja",""))
trgfile = os.path.join("..", reldirpath,f.replace(".jinja",""))
template = env.get_template(relfilepath)


@ -1,3 +1,7 @@
"""
Read the changelogs / version metadata and create all git tags
"""
import os
import subprocess as sp
import yaml


@ -1,20 +0,0 @@
#!/usr/bin/env sh
apk update
apk add \
gcc \
g++ \
python3-dev \
libxml2-dev \
libxslt-dev \
libffi-dev \
libc-dev \
py3-pip \
linux-headers \
python3 \
py3-lxml \
tzdata \
vips
apk add py3-pip
pip install wheel
pip install malojaserver


@ -1,9 +0,0 @@
#!/usr/bin/env sh
apt update
apt install \
python3-pip \
python3
apt install python3-pip
pip install wheel
pip install malojaserver


@ -1,16 +0,0 @@
#!/usr/bin/env sh
apk update
apk add \
gcc \
g++ \
python3-dev \
libxml2-dev \
libxslt-dev \
libffi-dev \
libc-dev \
py3-pip \
linux-headers \
python3 \
py3-lxml \
tzdata \
vips


@ -1,15 +0,0 @@
#!/usr/bin/env sh
pacman -Syu
pacman -S --needed \
gcc \
python3 \
libxml2 \
libxslt \
libffi \
glibc \
python-pip \
linux-headers \
python \
python-lxml \
tzdata \
libvips


@ -1,5 +0,0 @@
#!/usr/bin/env sh
apt update
apt install \
python3-pip \
python3


@ -26,77 +26,6 @@ def print_header_info():
#print("#####")
print()
def get_instance():
try:
return int(subprocess.check_output(["pgrep","-f","maloja$"]))
except Exception:
return None
def get_instance_supervisor():
try:
return int(subprocess.check_output(["pgrep","-f","maloja_supervisor"]))
except Exception:
return None
def restart():
if stop():
start()
else:
print(col["red"]("Could not stop Maloja!"))
def start():
if get_instance_supervisor() is not None:
print("Maloja is already running.")
else:
print_header_info()
setup()
try:
#p = subprocess.Popen(["python3","-m","maloja.server"],stdout=subprocess.DEVNULL,stderr=subprocess.DEVNULL)
sp = subprocess.Popen(["python3","-m","maloja","supervisor"],stdout=subprocess.DEVNULL,stderr=subprocess.DEVNULL)
print(col["green"]("Maloja started!"))
port = conf.malojaconfig["PORT"]
print("Visit your server address (Port " + str(port) + ") to see your web interface. Visit /admin_setup to get started.")
print("If you're installing this on your local machine, these links should get you there:")
print("\t" + col["blue"]("http://localhost:" + str(port)))
print("\t" + col["blue"]("http://localhost:" + str(port) + "/admin_setup"))
return True
except Exception:
print("Error while starting Maloja.")
return False
def stop():
for attempt in [(signal.SIGTERM,2),(signal.SIGTERM,5),(signal.SIGKILL,3),(signal.SIGKILL,5)]:
pid_sv = get_instance_supervisor()
pid = get_instance()
if pid is None and pid_sv is None:
print("Maloja stopped!")
return True
if pid_sv is not None:
os.kill(pid_sv,attempt[0])
if pid is not None:
os.kill(pid,attempt[0])
time.sleep(attempt[1])
return False
print("Maloja stopped!")
return True
def onlysetup():
print_header_info()
setup()
@ -109,24 +38,6 @@ def run_server():
from . import server
server.run_server()
def run_supervisor():
setproctitle("maloja_supervisor")
while True:
log("Maloja is not running, starting...",module="supervisor")
try:
process = subprocess.Popen(
["python3", "-m", "maloja","run"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
except Exception as e:
log("Error starting Maloja: " + str(e),module="supervisor")
else:
try:
process.wait()
except Exception as e:
log("Maloja crashed: " + str(e),module="supervisor")
def debug():
os.environ["MALOJA_DEV_MODE"] = 'true'
conf.malojaconfig.load_environment()
@ -135,10 +46,11 @@ def debug():
def print_info():
print_header_info()
print(col['lightblue']("Configuration Directory:"),conf.dir_settings['config'])
print(col['lightblue']("Data Directory: "),conf.dir_settings['state'])
print(col['lightblue']("State Directory: "),conf.dir_settings['state'])
print(col['lightblue']("Log Directory: "),conf.dir_settings['logs'])
print(col['lightblue']("Network: "),f"Dual Stack, Port {conf.malojaconfig['port']}" if conf.malojaconfig['host'] == "*" else f"IPv{ip_address(conf.malojaconfig['host']).version}, Port {conf.malojaconfig['port']}")
print(col['lightblue']("Timezone: "),f"UTC{conf.malojaconfig['timezone']:+d}")
print(col['lightblue']("Location Timezone: "),conf.malojaconfig['location_timezone'])
print()
try:
from importlib.metadata import distribution
@ -173,11 +85,7 @@ def main(*args,**kwargs):
actions = {
# server
"start":start,
"restart":restart,
"stop":stop,
"run":run_server,
"supervisor":run_supervisor,
"debug":debug,
"setup":onlysetup,
# admin scripts


@ -4,7 +4,7 @@
# you know what f*ck it
# this is hardcoded for now because of that damn project / package name discrepancy
# i'll fix it one day
VERSION = "3.2.2"
VERSION = "3.2.4"
HOMEPAGE = "https://github.com/krateng/maloja"


@ -25,9 +25,20 @@ __logmodulename__ = "apis"
cla = CleanerAgent()
# wrapper method: calls handle. final net to catch exceptions and map them to the handlers proper json / xml response
# handle method: finds the method for this path / query. can only raise InvalidMethodException
# scrobble: NOT the exposed scrobble method - helper for all APIs to scrobble their results with self-identification
class APIHandler:
__apiname__: str
errors: dict
# make these classes singletons
_instance = None
def __new__(cls, *args, **kwargs):
if not isinstance(cls._instance, cls):
cls._instance = object.__new__(cls, *args, **kwargs)
@ -62,37 +73,33 @@ class APIHandler:
try:
response.status,result = self.handle(path,keys)
except Exception:
exceptiontype = sys.exc_info()[0]
if exceptiontype in self.errors:
response.status,result = self.errors[exceptiontype]
log(f"Error with {self.__apiname__} API: {exceptiontype} (Request: {path})")
except Exception as e:
for exc_type, exc_response in self.errors.items():
if isinstance(e, exc_type):
response.status, result = exc_response
log(f"Error with {self.__apiname__} API: {e} (Request: {path})")
break
else:
response.status,result = 500,{"status":"Unknown error","code":500}
log(f"Unhandled Exception with {self.__apiname__} API: {exceptiontype} (Request: {path})")
# THIS SHOULD NOT HAPPEN
response.status, result = 500, {"status": "Unknown error", "code": 500}
log(f"Unhandled Exception with {self.__apiname__} API: {e} (Request: {path})")
return result
#else:
# result = {"error":"Invalid scrobble protocol"}
# response.status = 500
def handle(self,path,keys):
try:
methodname = self.get_method(path,keys)
methodname = self.get_method(path, keys)
method = self.methods[methodname]
except Exception:
log("Could not find a handler for method " + str(methodname) + " in API " + self.__apiname__,module="debug")
log("Keys: " + str(keys),module="debug")
except KeyError:
log(f"Could not find a handler for method {methodname} in API {self.__apiname__}", module="debug")
log(f"Keys: {keys}", module="debug")
raise InvalidMethodException()
return method(path,keys)
return method(path, keys)
def scrobble(self,rawscrobble,client=None):
# fixing etc is handled by the main scrobble function
try:
return database.incoming_scrobble(rawscrobble,api=self.__apiname__,client=client)
except Exception:
raise ScrobblingException()
return database.incoming_scrobble(rawscrobble,api=self.__apiname__,client=client)


@ -3,4 +3,4 @@ class InvalidAuthException(Exception): pass
class InvalidMethodException(Exception): pass
class InvalidSessionKey(Exception): pass
class MalformedJSONException(Exception): pass
class ScrobblingException(Exception): pass


@ -21,13 +21,22 @@ class Audioscrobbler(APIHandler):
"track.scrobble":self.submit_scrobble
}
self.errors = {
BadAuthException:(400,{"error":6,"message":"Requires authentication"}),
InvalidAuthException:(401,{"error":4,"message":"Invalid credentials"}),
InvalidMethodException:(200,{"error":3,"message":"Invalid method"}),
InvalidSessionKey:(403,{"error":9,"message":"Invalid session key"}),
ScrobblingException:(500,{"error":8,"message":"Operation failed"})
BadAuthException: (400, {"error": 6, "message": "Requires authentication"}),
InvalidAuthException: (401, {"error": 4, "message": "Invalid credentials"}),
InvalidMethodException: (200, {"error": 3, "message": "Invalid method"}),
InvalidSessionKey: (403, {"error": 9, "message": "Invalid session key"}),
Exception: (500, {"error": 8, "message": "Operation failed"})
}
# xml string escaping: https://stackoverflow.com/a/28703510
def xml_escape(self, str_xml: str):
str_xml = str_xml.replace("&", "&amp;")
str_xml = str_xml.replace("<", "&lt;")
str_xml = str_xml.replace("<", "&lt;")
str_xml = str_xml.replace("\"", "&quot;")
str_xml = str_xml.replace("'", "&apos;")
return str_xml
def get_method(self,pathnodes,keys):
return keys.get("method")
@ -45,12 +54,22 @@ class Audioscrobbler(APIHandler):
token = keys.get("authToken")
user = keys.get("username")
password = keys.get("password")
format = keys.get("format") or "xml" # Audioscrobbler 2.0 uses XML by default
# either username and password
if user is not None and password is not None:
client = apikeystore.check_and_identify_key(password)
if client:
sessionkey = self.generate_key(client)
return 200,{"session":{"key":sessionkey}}
if format == "json":
return 200,{"session":{"key":sessionkey}}
else:
return 200,"""<lfm status="ok">
<session>
<name>%s</name>
<key>%s</key>
<subscriber>0</subscriber>
</session>
</lfm>""" % (self.xml_escape(user), self.xml_escape(sessionkey))
else:
raise InvalidAuthException()
# or username and token (deprecated by lastfm)
@ -59,7 +78,16 @@ class Audioscrobbler(APIHandler):
key = apikeystore[client]
if md5(user + md5(key)) == token:
sessionkey = self.generate_key(client)
return 200,{"session":{"key":sessionkey}}
if format == "json":
return 200,{"session":{"key":sessionkey}}
else:
return 200,"""<lfm status="ok">
<session>
<name>%s</name>
<key>%s</key>
<subscriber>0</subscriber>
</session>
</lfm>""" % (self.xml_escape(user), self.xml_escape(sessionkey))
raise InvalidAuthException()
else:
raise BadAuthException()


@ -23,11 +23,11 @@ class AudioscrobblerLegacy(APIHandler):
"scrobble":self.submit_scrobble
}
self.errors = {
BadAuthException:(403,"BADAUTH\n"),
InvalidAuthException:(403,"BADAUTH\n"),
InvalidMethodException:(400,"FAILED\n"),
InvalidSessionKey:(403,"BADSESSION\n"),
ScrobblingException:(500,"FAILED\n")
BadAuthException: (403, "BADAUTH\n"),
InvalidAuthException: (403, "BADAUTH\n"),
InvalidMethodException: (400, "FAILED\n"),
InvalidSessionKey: (403, "BADSESSION\n"),
Exception: (500, "FAILED\n")
}
def get_method(self,pathnodes,keys):


@ -3,6 +3,7 @@ from ._exceptions import *
from .. import database
import datetime
from ._apikeys import apikeystore
from ..database.exceptions import DuplicateScrobble, DuplicateTimestamp
from ..pkg_global.conf import malojaconfig
@ -21,11 +22,13 @@ class Listenbrainz(APIHandler):
"validate-token":self.validate_token
}
self.errors = {
BadAuthException:(401,{"code":401,"error":"You need to provide an Authorization header."}),
InvalidAuthException:(401,{"code":401,"error":"Incorrect Authorization"}),
InvalidMethodException:(200,{"code":200,"error":"Invalid Method"}),
MalformedJSONException:(400,{"code":400,"error":"Invalid JSON document submitted."}),
ScrobblingException:(500,{"code":500,"error":"Unspecified server error."})
BadAuthException: (401, {"code": 401, "error": "You need to provide an Authorization header."}),
InvalidAuthException: (401, {"code": 401, "error": "Incorrect Authorization"}),
InvalidMethodException: (200, {"code": 200, "error": "Invalid Method"}),
MalformedJSONException: (400, {"code": 400, "error": "Invalid JSON document submitted."}),
DuplicateScrobble: (200, {"status": "ok"}),
DuplicateTimestamp: (409, {"error": "Scrobble with the same timestamp already exists."}),
Exception: (500, {"code": 500, "error": "Unspecified server error."})
}
def get_method(self,pathnodes,keys):


@ -7,7 +7,6 @@ from bottle import response, static_file, FormsDict
from inspect import signature
from doreah.logging import log
from doreah.auth import authenticated_function
# nimrodel API
from nimrodel import EAPI as API
@ -15,7 +14,7 @@ from nimrodel import Multi
from .. import database
from ..pkg_global.conf import malojaconfig, data_dir
from ..pkg_global.conf import malojaconfig, data_dir, auth
@ -82,6 +81,24 @@ errors = {
'desc':"This entity does not exist in the database."
}
}),
database.exceptions.DuplicateTimestamp: lambda e: (409,{
"status":"error",
"error":{
'type':'duplicate_timestamp',
'value':e.rejected_scrobble,
'desc':"A scrobble is already registered with this timestamp."
}
}),
database.exceptions.DuplicateScrobble: lambda e: (200,{
"status": "success",
"desc": "The scrobble is present in the database.",
"track": {},
"warnings": [{
'type': 'scrobble_exists',
'value': None,
'desc': 'This scrobble exists in the database (same timestamp and track). The submitted scrobble was not added.'
}]
}),
images.MalformedB64: lambda e: (400,{
"status":"failure",
"error":{
@ -474,7 +491,7 @@ def get_top_artists_external(k_filter, k_limit, k_delimit, k_amount):
:rtype: Dictionary"""
ckeys = {**k_limit, **k_delimit}
results = database.get_top_artists(**ckeys)
results = database.get_top_artists(**ckeys,compatibility=True)
return {
"status":"ok",
@ -493,7 +510,7 @@ def get_top_tracks_external(k_filter, k_limit, k_delimit, k_amount):
:rtype: Dictionary"""
ckeys = {**k_limit, **k_delimit}
results = database.get_top_tracks(**ckeys)
results = database.get_top_tracks(**ckeys,compatibility=True)
# IMPLEMENT THIS FOR TOP TRACKS OF ARTIST/ALBUM AS WELL?
return {
@ -513,7 +530,7 @@ def get_top_albums_external(k_filter, k_limit, k_delimit, k_amount):
:rtype: Dictionary"""
ckeys = {**k_limit, **k_delimit}
results = database.get_top_albums(**ckeys)
results = database.get_top_albums(**ckeys,compatibility=True)
# IMPLEMENT THIS FOR TOP ALBUMS OF ARTIST AS WELL?
return {
@ -567,7 +584,7 @@ def album_info_external(k_filter, k_limit, k_delimit, k_amount):
@api.post("newscrobble")
@authenticated_function(alternate=api_key_correct,api=True,pass_auth_result_as='auth_result')
@auth.authenticated_function(alternate=api_key_correct,api=True,pass_auth_result_as='auth_result')
@catch_exceptions
def post_scrobble(
artist:Multi=None,
@ -647,7 +664,7 @@ def post_scrobble(
@api.post("addpicture")
@authenticated_function(alternate=api_key_correct,api=True)
@auth.authenticated_function(alternate=api_key_correct,api=True)
@catch_exceptions
@convert_kwargs
def add_picture(k_filter, k_limit, k_delimit, k_amount, k_special):
@ -670,7 +687,7 @@ def add_picture(k_filter, k_limit, k_delimit, k_amount, k_special):
@api.post("importrules")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def import_rulemodule(**keys):
"""Internal Use Only"""
@ -689,7 +706,7 @@ def import_rulemodule(**keys):
@api.post("rebuild")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def rebuild(**keys):
"""Internal Use Only"""
@ -765,7 +782,7 @@ def search(**keys):
@api.post("newrule")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def newrule(**keys):
"""Internal Use Only"""
@ -776,21 +793,21 @@ def newrule(**keys):
@api.post("settings")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def set_settings(**keys):
"""Internal Use Only"""
malojaconfig.update(keys)
@api.post("apikeys")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def set_apikeys(**keys):
"""Internal Use Only"""
apikeystore.update(keys)
@api.post("import")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def import_scrobbles(identifier):
"""Internal Use Only"""
@ -798,7 +815,7 @@ def import_scrobbles(identifier):
import_scrobbles(identifier)
@api.get("backup")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def get_backup(**keys):
"""Internal Use Only"""
@ -811,7 +828,7 @@ def get_backup(**keys):
return static_file(os.path.basename(archivefile),root=tmpfolder)
@api.get("export")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def get_export(**keys):
"""Internal Use Only"""
@ -825,7 +842,7 @@ def get_export(**keys):
@api.post("delete_scrobble")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def delete_scrobble(timestamp):
"""Internal Use Only"""
@ -837,7 +854,7 @@ def delete_scrobble(timestamp):
@api.post("edit_artist")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def edit_artist(id,name):
"""Internal Use Only"""
@ -847,7 +864,7 @@ def edit_artist(id,name):
}
@api.post("edit_track")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def edit_track(id,title):
"""Internal Use Only"""
@ -857,7 +874,7 @@ def edit_track(id,title):
}
@api.post("edit_album")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def edit_album(id,albumtitle):
"""Internal Use Only"""
@ -868,7 +885,7 @@ def edit_album(id,albumtitle):
@api.post("merge_tracks")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def merge_tracks(target_id,source_ids):
"""Internal Use Only"""
@ -879,7 +896,7 @@ def merge_tracks(target_id,source_ids):
}
@api.post("merge_artists")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def merge_artists(target_id,source_ids):
"""Internal Use Only"""
@ -890,7 +907,7 @@ def merge_artists(target_id,source_ids):
}
@api.post("merge_albums")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def merge_artists(target_id,source_ids):
"""Internal Use Only"""
@ -901,7 +918,7 @@ def merge_artists(target_id,source_ids):
}
@api.post("associate_albums_to_artist")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def associate_albums_to_artist(target_id,source_ids,remove=False):
result = database.associate_albums_to_artist(target_id,source_ids,remove=remove)
@ -913,7 +930,7 @@ def associate_albums_to_artist(target_id,source_ids,remove=False):
}
@api.post("associate_tracks_to_artist")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def associate_tracks_to_artist(target_id,source_ids,remove=False):
result = database.associate_tracks_to_artist(target_id,source_ids,remove=remove)
@ -925,7 +942,7 @@ def associate_tracks_to_artist(target_id,source_ids,remove=False):
}
@api.post("associate_tracks_to_album")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def associate_tracks_to_album(target_id,source_ids):
result = database.associate_tracks_to_album(target_id,source_ids)
@ -937,7 +954,7 @@ def associate_tracks_to_album(target_id,source_ids):
@api.post("reparse_scrobble")
@authenticated_function(api=True)
@auth.authenticated_function(api=True)
@catch_exceptions
def reparse_scrobble(timestamp):
"""Internal Use Only"""


@ -15,13 +15,15 @@ class CleanerAgent:
def updateRules(self):
rawrules = []
for f in os.listdir(data_dir["rules"]()):
if f.split('.')[-1].lower() != 'tsv': continue
filepath = data_dir["rules"](f)
with open(filepath,'r') as filed:
reader = csv.reader(filed,delimiter="\t")
rawrules += [[col for col in entry if col] for entry in reader if len(entry)>0 and not entry[0].startswith('#')]
try:
for f in os.listdir(data_dir["rules"]()):
if f.split('.')[-1].lower() != 'tsv': continue
filepath = data_dir["rules"](f)
with open(filepath,'r') as filed:
reader = csv.reader(filed,delimiter="\t")
rawrules += [[col for col in entry if col] for entry in reader if len(entry)>0 and not entry[0].startswith('#')]
except FileNotFoundError:
pass
self.rules_belongtogether = [r[1] for r in rawrules if r[0]=="belongtogether"]
self.rules_notanartist = [r[1] for r in rawrules if r[0]=="notanartist"]


@ -160,8 +160,8 @@ replaceartist 여자친구 GFriend GFriend
# Girl's Generation
replaceartist 소녀시대 Girls' Generation
replaceartist SNSD Girls' Generation
replaceartist Girls' Generation-TTS TaeTiSeo
countas TaeTiSeo Girls' Generation
replaceartist Girls' Generation-TTS TaeTiSeo
countas TaeTiSeo Girls' Generation
# Apink
replaceartist A Pink Apink



@ -1,6 +1,8 @@
# server
from bottle import request, response, FormsDict
from ..pkg_global import conf
# decorator that makes sure this function is only run in normal operation,
# not when we run a task that needs to access the database
@ -27,7 +29,6 @@ from . import exceptions
# doreah toolkit
from doreah.logging import log
from doreah.auth import authenticated_api, authenticated_api_with_alternate
import doreah
@ -42,6 +43,7 @@ from collections import namedtuple
from threading import Lock
import yaml, json
import math
from itertools import takewhile
# url handling
import urllib
@ -318,7 +320,7 @@ def associate_tracks_to_album(target_id,source_ids):
if target_id:
target = sqldb.get_album(target_id)
log(f"Adding {sources} into {target}")
sqldb.add_tracks_to_albums({src:target_id for src in source_ids})
sqldb.add_tracks_to_albums({src:target_id for src in source_ids},replace=True)
else:
sqldb.remove_album(source_ids)
result = {'sources':sources,'target':target}
@ -444,10 +446,11 @@ def get_charts_albums(dbconn=None,resolve_ids=True,only_own_albums=False,**keys)
(since,to) = keys.get('timerange').timestamps()
if 'artist' in keys:
result = sqldb.count_scrobbles_by_album_combined(since=since,to=to,artist=keys['artist'],associated=keys.get('associated',False),resolve_ids=resolve_ids,dbconn=dbconn)
artist = sqldb.get_artist(sqldb.get_artist_id(keys['artist']))
result = sqldb.count_scrobbles_by_album_combined(since=since,to=to,artist=artist,associated=keys.get('associated',False),resolve_ids=resolve_ids,dbconn=dbconn)
if only_own_albums:
# TODO: this doesnt take associated into account and doesnt change ranks
result = [e for e in result if keys['artist'] in (e['album']['artists'] or [])]
result = [e for e in result if artist in (e['album']['artists'] or [])]
else:
result = sqldb.count_scrobbles_by_album(since=since,to=to,resolve_ids=resolve_ids,dbconn=dbconn)
return result
@ -570,7 +573,7 @@ def get_performance(dbconn=None,**keys):
return results
@waitfordb
def get_top_artists(dbconn=None,**keys):
def get_top_artists(dbconn=None,compatibility=True,**keys):
separate = keys.get('separate')
@ -578,42 +581,73 @@ def get_top_artists(dbconn=None,**keys):
results = []
for rng in rngs:
try:
res = get_charts_artists(timerange=rng,separate=separate,dbconn=dbconn)[0]
results.append({"range":rng,"artist":res["artist"],"scrobbles":res["scrobbles"],"real_scrobbles":res["real_scrobbles"],"associated_artists":sqldb.get_associated_artists(res["artist"])})
except Exception:
results.append({"range":rng,"artist":None,"scrobbles":0,"real_scrobbles":0})
result = {'range':rng}
res = get_charts_artists(timerange=rng,separate=separate,dbconn=dbconn)
result['top'] = [
{'artist': r['artist'], 'scrobbles': r['scrobbles'], 'real_scrobbles':r['real_scrobbles'], 'associated_artists': sqldb.get_associated_artists(r['artist'])}
for r in takewhile(lambda x:x['rank']==1,res)
]
# for third party applications
if compatibility:
if result['top']:
result.update(result['top'][0])
else:
result.update({'artist':None,'scrobbles':0,'real_scrobbles':0})
results.append(result)
return results
@waitfordb
def get_top_tracks(dbconn=None,**keys):
def get_top_tracks(dbconn=None,compatibility=True,**keys):
rngs = ranges(**{k:keys[k] for k in keys if k in ["since","to","within","timerange","step","stepn","trail"]})
results = []
for rng in rngs:
try:
res = get_charts_tracks(timerange=rng,dbconn=dbconn)[0]
results.append({"range":rng,"track":res["track"],"scrobbles":res["scrobbles"]})
except Exception:
results.append({"range":rng,"track":None,"scrobbles":0})
result = {'range':rng}
res = get_charts_tracks(timerange=rng,dbconn=dbconn)
result['top'] = [
{'track': r['track'], 'scrobbles': r['scrobbles']}
for r in takewhile(lambda x:x['rank']==1,res)
]
# for third party applications
if compatibility:
if result['top']:
result.update(result['top'][0])
else:
result.update({'track':None,'scrobbles':0})
results.append(result)
return results
@waitfordb
def get_top_albums(dbconn=None,**keys):
def get_top_albums(dbconn=None,compatibility=True,**keys):
rngs = ranges(**{k:keys[k] for k in keys if k in ["since","to","within","timerange","step","stepn","trail"]})
results = []
for rng in rngs:
try:
res = get_charts_albums(timerange=rng,dbconn=dbconn)[0]
results.append({"range":rng,"album":res["album"],"scrobbles":res["scrobbles"]})
except Exception:
results.append({"range":rng,"album":None,"scrobbles":0})
result = {'range':rng}
res = get_charts_albums(timerange=rng,dbconn=dbconn)
result['top'] = [
{'album': r['album'], 'scrobbles': r['scrobbles']}
for r in takewhile(lambda x:x['rank']==1,res)
]
# for third party applications
if compatibility:
if result['top']:
result.update(result['top'][0])
else:
result.update({'album':None,'scrobbles':0})
results.append(result)
return results
@ -900,6 +934,9 @@ def get_predefined_rulesets(dbconn=None):
def start_db():
conf.AUX_MODE = True # that is, without a doubt, the worst python code you have ever seen
# Upgrade database
from .. import upgrade
upgrade.upgrade_db(sqldb.add_scrobbles)
@ -909,11 +946,19 @@ def start_db():
from . import associated
associated.load_associated_rules()
# import scrobbles
from ..proccontrol.tasks.import_scrobbles import import_scrobbles #lmao this codebase is so fucked
for f in os.listdir(data_dir['import']()):
if f != 'dummy':
import_scrobbles(data_dir['import'](f))
dbstatus['healthy'] = True
conf.AUX_MODE = False # but you have seen it
# inform time module about begin of scrobbling
try:
firstscrobble = sqldb.get_scrobbles()[0]
firstscrobble = sqldb.get_scrobbles(limit=1)[0]
register_scrobbletime(firstscrobble['time'])
except IndexError:
register_scrobbletime(int(datetime.datetime.now().timestamp()))


@ -19,12 +19,16 @@ def load_associated_rules():
# load from file
rawrules = []
for f in os.listdir(data_dir["rules"]()):
if f.split('.')[-1].lower() != 'tsv': continue
filepath = data_dir["rules"](f)
with open(filepath,'r') as filed:
reader = csv.reader(filed,delimiter="\t")
rawrules += [[col for col in entry if col] for entry in reader if len(entry)>0 and not entry[0].startswith('#')]
try:
for f in os.listdir(data_dir["rules"]()):
if f.split('.')[-1].lower() != 'tsv': continue
filepath = data_dir["rules"](f)
with open(filepath,'r') as filed:
reader = csv.reader(filed,delimiter="\t")
rawrules += [[col for col in entry if col] for entry in reader if len(entry)>0 and not entry[0].startswith('#')]
except FileNotFoundError:
return
rules = [{'source_artist':r[1],'target_artist':r[2]} for r in rawrules if r[0]=="countas"]
#for rule in rules:


@ -1,37 +1,57 @@
from bottle import HTTPError
class EntityExists(Exception):
def __init__(self,entitydict):
def __init__(self, entitydict):
self.entitydict = entitydict
class TrackExists(EntityExists):
pass
class ArtistExists(EntityExists):
pass
class AlbumExists(EntityExists):
pass
# if the scrobbles dont match
class DuplicateTimestamp(Exception):
def __init__(self, existing_scrobble, rejected_scrobble):
self.existing_scrobble = existing_scrobble
self.rejected_scrobble = rejected_scrobble
# if it's the same scrobble
class DuplicateScrobble(Exception):
def __init__(self, scrobble):
self.scrobble = scrobble
class DatabaseNotBuilt(HTTPError):
def __init__(self):
super().__init__(
status=503,
body="The Maloja Database is being upgraded to support new Maloja features. This could take a while.",
headers={"Retry-After":120}
headers={"Retry-After": 120}
)
class MissingScrobbleParameters(Exception):
def __init__(self,params=[]):
def __init__(self, params=[]):
self.params = params
class MissingEntityParameter(Exception):
pass
class EntityDoesNotExist(HTTPError):
entitytype = 'Entity'
def __init__(self,entitydict):
self.entitydict = entitydict
super().__init__(
@ -39,9 +59,14 @@ class EntityDoesNotExist(HTTPError):
body=f"The {self.entitytype} '{self.entitydict}' does not exist in the database."
)
class ArtistDoesNotExist(EntityDoesNotExist):
entitytype = 'Artist'
class AlbumDoesNotExist(EntityDoesNotExist):
entitytype = 'Album'
class TrackDoesNotExist(EntityDoesNotExist):
entitytype = 'Track'


@ -1,3 +1,5 @@
from typing import TypedDict, Optional, cast
import sqlalchemy as sql
from sqlalchemy.dialects.sqlite import insert as sqliteinsert
import json
@ -213,6 +215,25 @@ def set_maloja_info(info,dbconn=None):
# The last two fields are not returned under normal circumstances
class AlbumDict(TypedDict):
albumtitle: str
artists: list[str]
class TrackDict(TypedDict):
artists: list[str]
title: str
album: AlbumDict
length: int | None
class ScrobbleDict(TypedDict):
time: int
track: TrackDict
duration: int
origin: str
extra: Optional[dict]
rawscrobble: Optional[dict]
##### Conversions between DB and dicts
@ -222,140 +243,164 @@ def set_maloja_info(info,dbconn=None):
### DB -> DICT
def scrobbles_db_to_dict(rows,include_internal=False,dbconn=None):
tracks = get_tracks_map(set(row.track_id for row in rows),dbconn=dbconn)
def scrobbles_db_to_dict(rows, include_internal=False, dbconn=None) -> list[ScrobbleDict]:
tracks: list[TrackDict] = get_tracks_map(set(row.track_id for row in rows), dbconn=dbconn)
return [
{
cast(ScrobbleDict, {
**{
"time":row.timestamp,
"track":tracks[row.track_id],
"duration":row.duration,
"origin":row.origin,
"time": row.timestamp,
"track": tracks[row.track_id],
"duration": row.duration,
"origin": row.origin
},
**({
"extra":json.loads(row.extra or '{}'),
"rawscrobble":json.loads(row.rawscrobble or '{}')
"extra": json.loads(row.extra or '{}'),
"rawscrobble": json.loads(row.rawscrobble or '{}')
} if include_internal else {})
}
})
for row in rows
]
def scrobble_db_to_dict(row,dbconn=None):
return scrobbles_db_to_dict([row],dbconn=dbconn)[0]
def tracks_db_to_dict(rows,dbconn=None):
artists = get_artists_of_tracks(set(row.id for row in rows),dbconn=dbconn)
albums = get_albums_map(set(row.album_id for row in rows),dbconn=dbconn)
def scrobble_db_to_dict(row, dbconn=None) -> ScrobbleDict:
return scrobbles_db_to_dict([row], dbconn=dbconn)[0]
def tracks_db_to_dict(rows, dbconn=None) -> list[TrackDict]:
artists = get_artists_of_tracks(set(row.id for row in rows), dbconn=dbconn)
albums = get_albums_map(set(row.album_id for row in rows), dbconn=dbconn)
return [
{
cast(TrackDict, {
"artists":artists[row.id],
"title":row.title,
"album":albums.get(row.album_id),
"length":row.length
}
})
for row in rows
]
def track_db_to_dict(row,dbconn=None):
return tracks_db_to_dict([row],dbconn=dbconn)[0]
def artists_db_to_dict(rows,dbconn=None):
def track_db_to_dict(row, dbconn=None) -> TrackDict:
return tracks_db_to_dict([row], dbconn=dbconn)[0]
def artists_db_to_dict(rows, dbconn=None) -> list[str]:
return [
row.name
for row in rows
]
def artist_db_to_dict(row,dbconn=None):
return artists_db_to_dict([row],dbconn=dbconn)[0]
def albums_db_to_dict(rows,dbconn=None):
artists = get_artists_of_albums(set(row.id for row in rows),dbconn=dbconn)
def artist_db_to_dict(row, dbconn=None) -> str:
return artists_db_to_dict([row], dbconn=dbconn)[0]
def albums_db_to_dict(rows, dbconn=None) -> list[AlbumDict]:
artists = get_artists_of_albums(set(row.id for row in rows), dbconn=dbconn)
return [
{
"artists":artists.get(row.id),
"albumtitle":row.albtitle,
}
cast(AlbumDict, {
"artists": artists.get(row.id),
"albumtitle": row.albtitle,
})
for row in rows
]
def album_db_to_dict(row,dbconn=None):
return albums_db_to_dict([row],dbconn=dbconn)[0]
def album_db_to_dict(row, dbconn=None) -> AlbumDict:
return albums_db_to_dict([row], dbconn=dbconn)[0]
### DICT -> DB
# These should return None when no data is in the dict so they can be used for update statements
def scrobble_dict_to_db(info,update_album=False,dbconn=None):
def scrobble_dict_to_db(info: ScrobbleDict, update_album=False, dbconn=None):
return {
"timestamp":info.get('time'),
"origin":info.get('origin'),
"duration":info.get('duration'),
"track_id":get_track_id(info.get('track'),update_album=update_album,dbconn=dbconn),
"extra":json.dumps(info.get('extra')) if info.get('extra') else None,
"rawscrobble":json.dumps(info.get('rawscrobble')) if info.get('rawscrobble') else None
"timestamp": info.get('time'),
"origin": info.get('origin'),
"duration": info.get('duration'),
"track_id": get_track_id(info.get('track'), update_album=update_album, dbconn=dbconn),
"extra": json.dumps(info.get('extra')) if info.get('extra') else None,
"rawscrobble": json.dumps(info.get('rawscrobble')) if info.get('rawscrobble') else None
}
def track_dict_to_db(info,dbconn=None):
def track_dict_to_db(info: TrackDict, dbconn=None):
return {
"title":info.get('title'),
"title_normalized":normalize_name(info.get('title','')) or None,
"length":info.get('length')
"title": info.get('title'),
"title_normalized": normalize_name(info.get('title', '')) or None,
"length": info.get('length')
}
def artist_dict_to_db(info,dbconn=None):
def artist_dict_to_db(info: str, dbconn=None):
return {
"name": info,
"name_normalized":normalize_name(info)
"name_normalized": normalize_name(info)
}
def album_dict_to_db(info,dbconn=None):
def album_dict_to_db(info: AlbumDict, dbconn=None):
return {
"albtitle":info.get('albumtitle'),
"albtitle_normalized":normalize_name(info.get('albumtitle'))
"albtitle": info.get('albumtitle'),
"albtitle_normalized": normalize_name(info.get('albumtitle'))
}
##### Actual Database interactions
# TODO: remove all resolve_id args and do that logic outside the caching to improve hit chances
# TODO: maybe also factor out all initial get entity funcs (some here, some in __init__) and throw exceptions
@connection_provider
def add_scrobble(scrobbledict,update_album=False,dbconn=None):
add_scrobbles([scrobbledict],update_album=update_album,dbconn=dbconn)
def add_scrobble(scrobbledict: ScrobbleDict, update_album=False, dbconn=None):
_, ex, er = add_scrobbles([scrobbledict], update_album=update_album, dbconn=dbconn)
if er > 0:
raise exc.DuplicateTimestamp(existing_scrobble=None, rejected_scrobble=scrobbledict)
# TODO: actually pass existing scrobble
elif ex > 0:
raise exc.DuplicateScrobble(scrobble=scrobbledict)
@connection_provider
def add_scrobbles(scrobbleslist,update_album=False,dbconn=None):
def add_scrobbles(scrobbleslist: list[ScrobbleDict], update_album=False, dbconn=None) -> tuple[int, int, int]:
with SCROBBLE_LOCK:
ops = [
DB['scrobbles'].insert().values(
**scrobble_dict_to_db(s,update_album=update_album,dbconn=dbconn)
) for s in scrobbleslist
]
# ops = [
# DB['scrobbles'].insert().values(
# **scrobble_dict_to_db(s,update_album=update_album,dbconn=dbconn)
# ) for s in scrobbleslist
# ]
success,errors = 0,0
for op in ops:
success, exists, errors = 0, 0, 0
for s in scrobbleslist:
scrobble_entry = scrobble_dict_to_db(s, update_album=update_album, dbconn=dbconn)
try:
dbconn.execute(op)
dbconn.execute(DB['scrobbles'].insert().values(
**scrobble_entry
))
success += 1
except sql.exc.IntegrityError as e:
errors += 1
except sql.exc.IntegrityError:
# get existing scrobble
result = dbconn.execute(DB['scrobbles'].select().where(
DB['scrobbles'].c.timestamp == scrobble_entry['timestamp']
)).first()
if result.track_id == scrobble_entry['track_id']:
exists += 1
else:
errors += 1
# TODO check if actual duplicate
if errors > 0: log(f"{errors} Scrobbles have not been written to database (duplicate timestamps)!", color='red')
if exists > 0: log(f"{exists} Scrobbles have not been written to database (already exist)", color='orange')
return success, exists, errors
if errors > 0: log(f"{errors} Scrobbles have not been written to database!",color='red')
return success,errors
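A hypothetical consumer of the new three-part return value, mirroring how add_scrobble above maps the counts to exceptions (log_import_outcome is not part of the codebase; it assumes the module-level add_scrobbles and log):

def log_import_outcome(scrobbles, dbconn=None):
	# hypothetical helper showing the (success, exists, errors) contract
	success, exists, errors = add_scrobbles(scrobbles, dbconn=dbconn)
	log(f"{success} new, {exists} already present, {errors} timestamp conflicts")
	return success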
@connection_provider
def delete_scrobble(scrobble_id,dbconn=None):
def delete_scrobble(scrobble_id: int, dbconn=None) -> bool:
with SCROBBLE_LOCK:
@@ -369,7 +414,7 @@ def delete_scrobble(scrobble_id,dbconn=None):
@connection_provider
def add_track_to_album(track_id,album_id,replace=False,dbconn=None):
def add_track_to_album(track_id: int, album_id: int, replace=False, dbconn=None) -> bool:
conditions = [
DB['tracks'].c.id == track_id
@@ -398,39 +443,39 @@ def add_track_to_album(track_id,album_id,replace=False,dbconn=None):
# ALL OF RECORDED HISTORY in order to display top weeks
# lmao
# TODO: figure out something better
return True
@connection_provider
def add_tracks_to_albums(track_to_album_id_dict,replace=False,dbconn=None):
def add_tracks_to_albums(track_to_album_id_dict: dict[int, int], replace=False, dbconn=None) -> bool:
for track_id in track_to_album_id_dict:
add_track_to_album(track_id,track_to_album_id_dict[track_id],dbconn=dbconn)
add_track_to_album(track_id,track_to_album_id_dict[track_id], replace=replace, dbconn=dbconn)
return True
@connection_provider
def remove_album(*track_ids,dbconn=None):
def remove_album(*track_ids: int, dbconn=None) -> bool:
# with *args, the annotation names the element type, not a list
op = DB['tracks'].update().where(
DB['tracks'].c.id.in_(track_ids)
).values(
album_id=None
)
dbconn.execute(op)
return True
### these will 'get' the ID of an entity, creating it if necessary
@cached_wrapper
@connection_provider
def get_track_id(trackdict,create_new=True,update_album=False,dbconn=None):
def get_track_id(trackdict: TrackDict, create_new=True, update_album=False, dbconn=None) -> int | None:
ntitle = normalize_name(trackdict['title'])
artist_ids = [get_artist_id(a,create_new=create_new,dbconn=dbconn) for a in trackdict['artists']]
artist_ids = [get_artist_id(a, create_new=create_new, dbconn=dbconn) for a in trackdict['artists']]
artist_ids = list(set(artist_ids))
op = DB['tracks'].select().where(
DB['tracks'].c.title_normalized==ntitle
DB['tracks'].c.title_normalized == ntitle
)
result = dbconn.execute(op).all()
for row in result:
@@ -440,7 +485,7 @@ def get_track_id(trackdict,create_new=True,update_album=False,dbconn=None):
op = DB['trackartists'].select(
# DB['trackartists'].c.artist_id
).where(
DB['trackartists'].c.track_id==row.id
DB['trackartists'].c.track_id == row.id
)
result = dbconn.execute(op).all()
match_artist_ids = [r.artist_id for r in result]
@@ -456,14 +501,14 @@ def get_track_id(trackdict,create_new=True,update_album=False,dbconn=None):
album_id = get_album_id(trackdict['album'],create_new=(update_album or not row.album_id),dbconn=dbconn)
add_track_to_album(row.id,album_id,replace=update_album,dbconn=dbconn)
return row.id
if not create_new: return None
if not create_new:
return None
#print("Creating new track")
op = DB['tracks'].insert().values(
**track_dict_to_db(trackdict,dbconn=dbconn)
**track_dict_to_db(trackdict, dbconn=dbconn)
)
result = dbconn.execute(op)
track_id = result.inserted_primary_key[0]
@@ -478,24 +523,26 @@ def get_track_id(trackdict,create_new=True,update_album=False,dbconn=None):
#print("Created",trackdict['title'],track_id)
if trackdict.get('album'):
add_track_to_album(track_id,get_album_id(trackdict['album'],dbconn=dbconn),dbconn=dbconn)
add_track_to_album(track_id, get_album_id(trackdict['album'], dbconn=dbconn), dbconn=dbconn)
return track_id
@cached_wrapper
@connection_provider
def get_artist_id(artistname,create_new=True,dbconn=None):
def get_artist_id(artistname: str, create_new=True, dbconn=None) -> int | None:
nname = normalize_name(artistname)
#print("looking for",nname)
op = DB['artists'].select().where(
DB['artists'].c.name_normalized==nname
DB['artists'].c.name_normalized == nname
)
result = dbconn.execute(op).all()
for row in result:
#print("ID for",artistname,"was",row[0])
return row.id
if not create_new: return None
if not create_new:
return None
op = DB['artists'].insert().values(
name=artistname,
@@ -508,15 +555,15 @@ def get_artist_id(artistname,create_new=True,dbconn=None):
@cached_wrapper
@connection_provider
def get_album_id(albumdict,create_new=True,ignore_albumartists=False,dbconn=None):
def get_album_id(albumdict: AlbumDict, create_new=True, ignore_albumartists=False, dbconn=None) -> int | None:
ntitle = normalize_name(albumdict['albumtitle'])
artist_ids = [get_artist_id(a,dbconn=dbconn) for a in (albumdict.get('artists') or [])]
artist_ids = [get_artist_id(a, dbconn=dbconn) for a in (albumdict.get('artists') or [])]
artist_ids = list(set(artist_ids))
op = DB['albums'].select(
# DB['albums'].c.id
).where(
DB['albums'].c.albtitle_normalized==ntitle
DB['albums'].c.albtitle_normalized == ntitle
)
result = dbconn.execute(op).all()
for row in result:
@@ -529,7 +576,7 @@ def get_album_id(albumdict,create_new=True,ignore_albumartists=False,dbconn=None
op = DB['albumartists'].select(
# DB['albumartists'].c.artist_id
).where(
DB['albumartists'].c.album_id==row.id
DB['albumartists'].c.album_id == row.id
)
result = dbconn.execute(op).all()
match_artist_ids = [r.artist_id for r in result]
@@ -538,11 +585,11 @@ def get_album_id(albumdict,create_new=True,ignore_albumartists=False,dbconn=None
#print("ID for",albumdict['title'],"was",row[0])
return row.id
if not create_new: return None
if not create_new:
return None
op = DB['albums'].insert().values(
**album_dict_to_db(albumdict,dbconn=dbconn)
**album_dict_to_db(albumdict, dbconn=dbconn)
)
result = dbconn.execute(op)
album_id = result.inserted_primary_key[0]
@@ -557,18 +604,15 @@ def get_album_id(albumdict,create_new=True,ignore_albumartists=False,dbconn=None
return album_id
### Edit existing
@connection_provider
def edit_scrobble(scrobble_id,scrobbleupdatedict,dbconn=None):
def edit_scrobble(scrobble_id: int, scrobbleupdatedict: dict, dbconn=None) -> bool:
dbentry = scrobble_dict_to_db(scrobbleupdatedict,dbconn=dbconn)
dbentry = {k:v for k,v in dbentry.items() if v}
dbentry = {k: v for k, v in dbentry.items() if v}
print("Updating scrobble",dbentry)
print("Updating scrobble", dbentry)
with SCROBBLE_LOCK:
@@ -579,97 +623,97 @@ def edit_scrobble(scrobble_id,scrobbleupdatedict,dbconn=None):
)
dbconn.execute(op)
return True
# edit function only for primary db information (not linked fields)
@connection_provider
def edit_artist(id,artistupdatedict,dbconn=None):
def edit_artist(artist_id: int, artistupdatedict: str, dbconn=None) -> bool:
artist = get_artist(id)
artist = get_artist(artist_id)
changedartist = artistupdatedict # well
dbentry = artist_dict_to_db(artistupdatedict,dbconn=dbconn)
dbentry = {k:v for k,v in dbentry.items() if v}
dbentry = artist_dict_to_db(artistupdatedict, dbconn=dbconn)
dbentry = {k: v for k, v in dbentry.items() if v}
existing_artist_id = get_artist_id(changedartist,create_new=False,dbconn=dbconn)
if existing_artist_id not in (None,id):
existing_artist_id = get_artist_id(changedartist, create_new=False, dbconn=dbconn)
if existing_artist_id not in (None, artist_id):
raise exc.ArtistExists(changedartist)
op = DB['artists'].update().where(
DB['artists'].c.id==id
DB['artists'].c.id == artist_id
).values(
**dbentry
)
result = dbconn.execute(op)
return True
# edit function only for primary db information (not linked fields)
@connection_provider
def edit_track(id,trackupdatedict,dbconn=None):
def edit_track(track_id: int, trackupdatedict: dict, dbconn=None) -> bool:
track = get_track(id,dbconn=dbconn)
changedtrack = {**track,**trackupdatedict}
track = get_track(track_id, dbconn=dbconn)
changedtrack: TrackDict = {**track, **trackupdatedict}
dbentry = track_dict_to_db(trackupdatedict,dbconn=dbconn)
dbentry = {k:v for k,v in dbentry.items() if v}
dbentry = track_dict_to_db(trackupdatedict, dbconn=dbconn)
dbentry = {k: v for k, v in dbentry.items() if v}
existing_track_id = get_track_id(changedtrack,create_new=False,dbconn=dbconn)
if existing_track_id not in (None,id):
existing_track_id = get_track_id(changedtrack, create_new=False, dbconn=dbconn)
if existing_track_id not in (None, track_id):
raise exc.TrackExists(changedtrack)
op = DB['tracks'].update().where(
DB['tracks'].c.id==id
DB['tracks'].c.id == track_id
).values(
**dbentry
)
result = dbconn.execute(op)
return True
# edit function only for primary db information (not linked fields)
@connection_provider
def edit_album(id,albumupdatedict,dbconn=None):
def edit_album(album_id: int, albumupdatedict: dict, dbconn=None) -> bool:
album = get_album(id,dbconn=dbconn)
changedalbum = {**album,**albumupdatedict}
album = get_album(album_id, dbconn=dbconn)
changedalbum: AlbumDict = {**album, **albumupdatedict}
dbentry = album_dict_to_db(albumupdatedict,dbconn=dbconn)
dbentry = {k:v for k,v in dbentry.items() if v}
dbentry = album_dict_to_db(albumupdatedict, dbconn=dbconn)
dbentry = {k: v for k, v in dbentry.items() if v}
existing_album_id = get_album_id(changedalbum,create_new=False,dbconn=dbconn)
if existing_album_id not in (None,id):
existing_album_id = get_album_id(changedalbum, create_new=False, dbconn=dbconn)
if existing_album_id not in (None, album_id):
raise exc.AlbumExists(changedalbum) # assuming an AlbumExists exception analogous to TrackExists/ArtistExists above
op = DB['albums'].update().where(
DB['albums'].c.id==id
DB['albums'].c.id == album_id
).values(
**dbentry
)
result = dbconn.execute(op)
return True
### Edit associations
@connection_provider
def add_artists_to_tracks(track_ids,artist_ids,dbconn=None):
def add_artists_to_tracks(track_ids: list[int], artist_ids: list[int], dbconn=None) -> bool:
op = DB['trackartists'].insert().values([
{'track_id':track_id,'artist_id':artist_id}
{'track_id': track_id, 'artist_id': artist_id}
for track_id in track_ids for artist_id in artist_ids
])
result = dbconn.execute(op)
# the resulting tracks could now be duplicates of existing ones
# this also takes care of clean_db
merge_duplicate_tracks(dbconn=dbconn)
return True
@connection_provider
def remove_artists_from_tracks(track_ids,artist_ids,dbconn=None):
def remove_artists_from_tracks(track_ids: list[int], artist_ids: list[int], dbconn=None) -> bool:
# only tracks that have at least one other artist
subquery = DB['trackartists'].select().where(
@@ -687,16 +731,14 @@ def remove_artists_from_tracks(track_ids,artist_ids,dbconn=None):
)
result = dbconn.execute(op)
# the resulting tracks could now be duplicates of existing ones
# this also takes care of clean_db
merge_duplicate_tracks(dbconn=dbconn)
return True
@connection_provider
def add_artists_to_albums(album_ids,artist_ids,dbconn=None):
def add_artists_to_albums(album_ids: list[int], artist_ids: list[int], dbconn=None) -> bool:
op = DB['albumartists'].insert().values([
{'album_id':album_id,'artist_id':artist_id}
@@ -704,16 +746,14 @@ def add_artists_to_albums(album_ids,artist_ids,dbconn=None):
])
result = dbconn.execute(op)
# the resulting albums could now be duplicates of existing ones
# this also takes care of clean_db
merge_duplicate_albums(dbconn=dbconn)
return True
@connection_provider
def remove_artists_from_albums(album_ids,artist_ids,dbconn=None):
def remove_artists_from_albums(album_ids: list[int], artist_ids: list[int], dbconn=None) -> bool:
# no check here, albums are allowed to have zero artists
@@ -725,17 +765,16 @@ def remove_artists_from_albums(album_ids,artist_ids,dbconn=None):
)
result = dbconn.execute(op)
# the resulting albums could now be duplicates of existing ones
# this also takes care of clean_db
merge_duplicate_albums(dbconn=dbconn)
return True
### Merge
@connection_provider
def merge_tracks(target_id,source_ids,dbconn=None):
def merge_tracks(target_id: int, source_ids: list[int], dbconn=None) -> bool:
op = DB['scrobbles'].update().where(
DB['scrobbles'].c.track_id.in_(source_ids)
@@ -744,11 +783,11 @@ def merge_tracks(target_id,source_ids,dbconn=None):
)
result = dbconn.execute(op)
clean_db(dbconn=dbconn)
return True
@connection_provider
def merge_artists(target_id,source_ids,dbconn=None):
def merge_artists(target_id: int, source_ids: list[int], dbconn=None) -> bool:
# some tracks could already have multiple of the to be merged artists
@@ -776,7 +815,6 @@ def merge_artists(target_id,source_ids,dbconn=None):
result = dbconn.execute(op)
# same for albums
op = DB['albumartists'].select().where(
DB['albumartists'].c.artist_id.in_(source_ids + [target_id])
@@ -797,7 +835,6 @@ def merge_artists(target_id,source_ids,dbconn=None):
result = dbconn.execute(op)
# tracks_artists = {}
# for row in result:
# tracks_artists.setdefault(row.track_id,[]).append(row.artist_id)
@@ -814,15 +851,14 @@ def merge_artists(target_id,source_ids,dbconn=None):
# result = dbconn.execute(op)
# this could have created duplicate tracks and albums
merge_duplicate_tracks(artist_id=target_id,dbconn=dbconn)
merge_duplicate_albums(artist_id=target_id,dbconn=dbconn)
merge_duplicate_tracks(artist_id=target_id, dbconn=dbconn)
merge_duplicate_albums(artist_id=target_id, dbconn=dbconn)
clean_db(dbconn=dbconn)
return True
@connection_provider
def merge_albums(target_id,source_ids,dbconn=None):
def merge_albums(target_id: int, source_ids: list[int], dbconn=None) -> bool:
op = DB['tracks'].update().where(
DB['tracks'].c.album_id.in_(source_ids)
@@ -831,7 +867,6 @@ def merge_albums(target_id,source_ids,dbconn=None):
)
result = dbconn.execute(op)
clean_db(dbconn=dbconn)
return True
@@ -860,19 +895,24 @@ def get_scrobbles_of_artist(artist,since=None,to=None,resolve_references=True,li
op = op.order_by(sql.desc('timestamp'))
else:
op = op.order_by(sql.asc('timestamp'))
if limit:
if limit and not associated:
# if we count associated we can't limit here because we remove stuff later!
op = op.limit(limit)
result = dbconn.execute(op).all()
# remove duplicates (multiple associated artists in the song, e.g. Irene & Seulgi both being counted as Red Velvet)
# distinct on doesn't seem to exist in sqlite
seen = set()
filtered_result = []
for row in result:
if row.timestamp not in seen:
filtered_result.append(row)
seen.add(row.timestamp)
result = filtered_result
if associated:
seen = set()
filtered_result = []
for row in result:
if row.timestamp not in seen:
filtered_result.append(row)
seen.add(row.timestamp)
result = filtered_result
if limit:
result = result[:limit]
if resolve_references:
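A toy illustration (made-up timestamps) of why the deduplication above has to run before the limit is applied:

rows = [10, 10, 20, 30, 30, 40]  # timestamps; duplicates from associated artists
seen, deduped = set(), []
for ts in rows:
	if ts not in seen:
		deduped.append(ts)
		seen.add(ts)
assert deduped[:3] == [10, 20, 30]  # dedup first, then limit: 3 distinct scrobbles
assert rows[:3] == [10, 10, 20]     # limit first: duplicates eat into the limit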
@@ -962,7 +1002,6 @@ def get_scrobbles(since=None,to=None,resolve_references=True,limit=None,reverse=
result = scrobbles_db_to_dict(result,dbconn=dbconn)
#result = [scrobble_db_to_dict(row,resolve_references=resolve_references) for i,row in enumerate(result) if i<max]
return result
@@ -1072,7 +1111,7 @@ def count_scrobbles_by_artist(since,to,associated=True,resolve_ids=True,dbconn=N
DB['scrobbles'].c.timestamp.between(since,to)
).group_by(
artistselect
).order_by(sql.desc('count'))
).order_by(sql.desc('count'),sql.desc('really_by_this_artist'))
result = dbconn.execute(op).all()
if resolve_ids:
@@ -1601,48 +1640,52 @@ def get_credited_artists(*artists,dbconn=None):
@cached_wrapper
@connection_provider
def get_track(id,dbconn=None):
def get_track(track_id: int, dbconn=None) -> TrackDict:
op = DB['tracks'].select().where(
DB['tracks'].c.id==id
DB['tracks'].c.id == track_id
)
result = dbconn.execute(op).all()
trackinfo = result[0]
return track_db_to_dict(trackinfo,dbconn=dbconn)
return track_db_to_dict(trackinfo, dbconn=dbconn)
@cached_wrapper
@connection_provider
def get_artist(id,dbconn=None):
def get_artist(artist_id: int, dbconn=None) -> str:
op = DB['artists'].select().where(
DB['artists'].c.id==id
DB['artists'].c.id == artist_id
)
result = dbconn.execute(op).all()
artistinfo = result[0]
return artist_db_to_dict(artistinfo,dbconn=dbconn)
return artist_db_to_dict(artistinfo, dbconn=dbconn)
@cached_wrapper
@connection_provider
def get_album(id,dbconn=None):
def get_album(album_id: int, dbconn=None) -> AlbumDict:
op = DB['albums'].select().where(
DB['albums'].c.id==id
DB['albums'].c.id == album_id
)
result = dbconn.execute(op).all()
albuminfo = result[0]
return album_db_to_dict(albuminfo,dbconn=dbconn)
return album_db_to_dict(albuminfo, dbconn=dbconn)
@cached_wrapper
@connection_provider
def get_scrobble(timestamp, include_internal=False, dbconn=None):
def get_scrobble(timestamp: int, include_internal=False, dbconn=None) -> ScrobbleDict:
op = DB['scrobbles'].select().where(
DB['scrobbles'].c.timestamp==timestamp
DB['scrobbles'].c.timestamp == timestamp
)
result = dbconn.execute(op).all()
scrobble = result[0]
return scrobbles_db_to_dict(rows=[scrobble], include_internal=include_internal)[0]
@cached_wrapper
@connection_provider
def search_artist(searchterm,dbconn=None):
@@ -1684,6 +1727,11 @@ def clean_db(dbconn=None):
log(f"Database Cleanup...")
to_delete = [
# NULL associations
"from albumartists where album_id is NULL",
"from albumartists where artist_id is NULL",
"from trackartists where track_id is NULL",
"from trackartists where artist_id is NULL",
# tracks with no scrobbles (trackartist entries first)
"from trackartists where track_id in (select id from tracks where id not in (select track_id from scrobbles))",
"from tracks where id not in (select track_id from scrobbles)",

View File

@@ -1,9 +1,9 @@
import os
import cProfile, pstats
import time
from doreah.logging import log
from doreah.timing import Clock
from ..pkg_global.conf import data_dir
@@ -27,8 +27,7 @@ def profile(func):
def newfunc(*args,**kwargs):
clock = Clock()
clock.start()
starttime = time.time()
if FULL_PROFILE:
benchmarkfolder = data_dir['logs']("benchmarks")
@@ -44,7 +43,7 @@ def profile(func):
if FULL_PROFILE:
localprofiler.disable()
seconds = clock.stop()
seconds = time.time() - starttime
if not SINGLE_CALLS:
times.setdefault(realfunc,[]).append(seconds)

View File

@@ -284,6 +284,12 @@ def image_request(artist_id=None,track_id=None,album_id=None):
if result is not None:
# we got an entry, even if it's that there is no image (value None)
if result['value'] is None:
# fallback to album regardless of setting (because we have no image)
if track_id:
track = database.sqldb.get_track(track_id)
if track.get("album"):
album_id = database.sqldb.get_album_id(track["album"])
return image_request(album_id=album_id)
# use placeholder
if malojaconfig["FANCY_PLACEHOLDER_ART"]:
placeholder_url = "https://generative-placeholders.glitch.me/image?width=300&height=300&style="

View File

@@ -1,16 +1,18 @@
from datetime import timezone, timedelta, date, time, datetime
from calendar import monthrange
import math
import zoneinfo
from abc import ABC, abstractmethod
from .pkg_global.conf import malojaconfig
OFFSET = malojaconfig["TIMEZONE"]
TIMEZONE = timezone(timedelta(hours=OFFSET))
LOCATION_TIMEZONE = malojaconfig["LOCATION_TIMEZONE"]
TIMEZONE = timezone(timedelta(hours=OFFSET)) if not LOCATION_TIMEZONE or LOCATION_TIMEZONE not in zoneinfo.available_timezones() else zoneinfo.ZoneInfo(LOCATION_TIMEZONE)
UTC = timezone.utc
FIRST_SCROBBLE = int(datetime.utcnow().replace(tzinfo=UTC).timestamp())
FIRST_SCROBBLE = int(datetime.now(UTC).timestamp())
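The same selection logic, factored into a standalone helper for illustration (resolve_tz is a hypothetical name; the values in the asserts are made up):

from datetime import timezone, timedelta
import zoneinfo

def resolve_tz(offset_hours: int, location_tz: str | None):
	# a known IANA name wins; otherwise fall back to the fixed UTC offset
	if location_tz and location_tz in zoneinfo.available_timezones():
		return zoneinfo.ZoneInfo(location_tz)
	return timezone(timedelta(hours=offset_hours))

assert str(resolve_tz(0, "Europe/Berlin")) == "Europe/Berlin"
assert resolve_tz(2, "Not/AZone") == timezone(timedelta(hours=2))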
def register_scrobbletime(timestamp):
global FIRST_SCROBBLE
@@ -63,7 +65,7 @@ class MTRangeGeneric(ABC):
# whether we currently live or will ever again live in this range
def active(self):
return (self.last_stamp() > datetime.utcnow().timestamp())
return (self.last_stamp() > datetime.now(timezone.utc).timestamp())
def __contains__(self,timestamp):
return timestamp >= self.first_stamp() and timestamp <= self.last_stamp()
@@ -111,7 +113,7 @@ class MTRangeGregorian(MTRangeSingular):
# whether we currently live or will ever again live in this range
# USE GENERIC SUPER METHOD INSTEAD
# def active(self):
# tod = datetime.datetime.utcnow().date()
# tod = datetime.datetime.now(timezone.utc).date()
# if tod.year > self.year: return False
# if self.precision == 1: return True
# if tod.year == self.year:
@@ -328,7 +330,7 @@ class MTRangeComposite(MTRangeGeneric):
if self.since is None: return FIRST_SCROBBLE
else: return self.since.first_stamp()
def last_stamp(self):
#if self.to is None: return int(datetime.utcnow().replace(tzinfo=timezone.utc).timestamp())
#if self.to is None: return int(datetime.now(timezone.utc).timestamp())
if self.to is None: return today().last_stamp()
else: return self.to.last_stamp()
@@ -421,8 +423,8 @@ def get_last_instance(category,current,target,amount):
str_to_time_range = {
**{s:callable for callable,strlist in currenttime_string_representations for s in strlist},
**{s:(lambda i=index:get_last_instance(thismonth,datetime.utcnow().month,i,12)) for index,strlist in enumerate(month_string_representations,1) for s in strlist},
**{s:(lambda i=index:get_last_instance(today,datetime.utcnow().isoweekday()+1%7,i,7)) for index,strlist in enumerate(weekday_string_representations,1) for s in strlist}
**{s:(lambda i=index:get_last_instance(thismonth,datetime.now(timezone.utc).month,i,12)) for index,strlist in enumerate(month_string_representations,1) for s in strlist},
**{s:(lambda i=index:get_last_instance(today,datetime.now(timezone.utc).isoweekday()+1%7,i,7)) for index,strlist in enumerate(weekday_string_representations,1) for s in strlist}
}
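The i=index default argument in these lambdas is the standard trick for capturing the loop variable by value; a minimal sketch of the difference:

late = [lambda: i for i in range(3)]       # closes over the variable i itself
bound = [lambda i=i: i for i in range(3)]  # freezes the current value of i
assert [f() for f in late] == [2, 2, 2]
assert [f() for f in bound] == [0, 1, 2]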

View File

@@ -29,6 +29,8 @@ def uri_to_internal(keys,accepted_entities=('artist','track','album'),forceTrack
# 1
filterkeys = {}
# this only takes care of the logic - what kind of entity we're dealing with
# it does not check with the database if it exists or what the canonical name is!!!
if "track" in accepted_entities and "title" in keys:
filterkeys.update({"track":{"artists":keys.getall("trackartist"),"title":keys.get("title")}})
if "artist" in accepted_entities and "artist" in keys:

View File

@@ -1,4 +1,7 @@
import os
import doreah.auth
import doreah.logging
from doreah.configuration import Configuration
from doreah.configuration import types as tp
@@ -29,9 +32,7 @@ pthj = os.path.join
def is_dir_usable(pth):
try:
os.makedirs(pth,exist_ok=True)
os.mknod(pthj(pth,".test"))
os.remove(pthj(pth,".test"))
return True
return os.access(pth,os.W_OK)
except Exception:
return False
@@ -179,7 +180,7 @@ malojaconfig = Configuration(
"name":(tp.String(), "Name", "Generic Maloja User")
},
"Third Party Services":{
"metadata_providers":(tp.List(tp.String()), "Metadata Providers", ['lastfm','spotify','deezer','audiodb','musicbrainz'], "Which metadata providers should be used in what order. Musicbrainz is rate-limited and should not be used first."),
"metadata_providers":(tp.List(tp.String()), "Metadata Providers", ['lastfm','spotify','deezer','audiodb','musicbrainz'], "List of which metadata providers should be used in what order. Musicbrainz is rate-limited and should not be used first."),
"scrobble_lastfm":(tp.Boolean(), "Proxy-Scrobble to Last.fm", False),
"lastfm_api_key":(tp.String(), "Last.fm API Key", None),
"lastfm_api_secret":(tp.String(), "Last.fm API Secret", None),
@@ -206,7 +207,8 @@ malojaconfig = Configuration(
"filters_remix":(tp.Set(tp.String()), "Remix Filters", ["Remix", "Remix Edit", "Short Mix", "Extended Mix", "Soundtrack Version"], "Filters used to recognize the remix artists in the title"),
"parse_remix_artists":(tp.Boolean(), "Parse Remix Artists", False),
"week_offset":(tp.Integer(), "Week Begin Offset", 0, "Start of the week for the purpose of weekly statistics. 0 = Sunday, 6 = Saturday"),
"timezone":(tp.Integer(), "UTC Offset", 0)
"timezone":(tp.Integer(), "UTC Offset", 0),
"location_timezone":(tp.String(), "Location Timezone", None)
},
"Web Interface":{
"default_range_startpage":(tp.Choice({'alltime':'All Time','year':'Year','month':"Month",'week':'Week'}), "Default Range for Startpage Stats", "year"),
@@ -297,6 +299,7 @@ data_directories = {
"auth":pthj(dir_settings['state'],"auth"),
"backups":pthj(dir_settings['state'],"backups"),
"images":pthj(dir_settings['state'],"images"),
"import":pthj(dir_settings['state'],"import"),
"scrobbles":pthj(dir_settings['state']),
"rules":pthj(dir_settings['config'],"rules"),
"clients":pthj(dir_settings['config']),
@@ -310,6 +313,12 @@ data_directories = {
}
for identifier,path in data_directories.items():
if path is None:
continue
if malojaconfig.readonly and (path == dir_settings['config'] or path.startswith(dir_settings['config']+'/')):
continue
try:
os.makedirs(path,exist_ok=True)
if not is_dir_usable(path): raise PermissionError(f"Directory {path} is not usable!")
@@ -320,41 +329,35 @@ for identifier,path in data_directories.items():
print("Cannot use",path,"for cache, finding new folder...")
data_directories['cache'] = dir_settings['cache'] = malojaconfig['DIRECTORY_CACHE'] = find_good_folder('cache')
else:
print("Directory",path,"is not usable.")
print(f"Directory for {identifier} ({path}) is not writeable.")
print("Please change permissions or settings!")
print("Make sure Maloja has write and execute access to this directory.")
raise
class DataDirs:
def __init__(self, dirs):
self.dirs = dirs
data_dir = {
k:lambda *x,k=k: pthj(data_directories[k],*x) for k in data_directories
}
def __getitem__(self, key):
return lambda *x, k=key: pthj(self.dirs[k], *x)
data_dir = DataDirs(data_directories)
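A self-contained sketch of the accessor behaviour, so existing call sites like data_dir['logs']("benchmarks") keep working (the class name and path here are hypothetical):

import os.path

class DataDirsSketch:
	def __init__(self, dirs):
		self.dirs = dirs
	def __getitem__(self, key):
		# each key yields a path-joining callable
		return lambda *x, k=key: os.path.join(self.dirs[k], *x)

dirs = DataDirsSketch({'logs': '/var/lib/maloja/logs'})
assert dirs['logs']("benchmarks") == '/var/lib/maloja/logs/benchmarks'
assert dirs['logs']() == '/var/lib/maloja/logs'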
### DOREAH OBJECTS
auth = doreah.auth.AuthManager(singleuser=True,cookieprefix='maloja',stylesheets=("/maloja.css",),dbfile=data_dir['auth']("auth.sqlite"))
#logger = doreah.logging.Logger(logfolder=data_dir['logs']() if malojaconfig["LOGGING"] else None)
#log = logger.log
# this is not how its supposed to be done, but lets ease the transition
doreah.logging.defaultlogger.logfolder = data_dir['logs']() if malojaconfig["LOGGING"] else None
### DOREAH CONFIGURATION
from doreah import config
config(
auth={
"multiuser":False,
"cookieprefix":"maloja",
"stylesheets":["/maloja.css"],
"dbfile":data_dir['auth']("auth.ddb")
},
logging={
"logfolder": data_dir['logs']() if malojaconfig["LOGGING"] else None
},
regular={
"offset": malojaconfig["TIMEZONE"]
}
)
custom_css_files = [f for f in os.listdir(data_dir['css']()) if f.lower().endswith('.css')]
try:
custom_css_files = [f for f in os.listdir(data_dir['css']()) if f.lower().endswith('.css')]
except FileNotFoundError:
custom_css_files = []
from ..database.sqldb import set_maloja_info
set_maloja_info({'last_run_version':VERSION})

View File

@@ -12,11 +12,12 @@ def export(targetfolder=None):
targetfolder = os.getcwd()
timestr = time.strftime("%Y_%m_%d_%H_%M_%S")
timestamp = int(time.time()) # ok this is technically a separate time get from above, but those ms are not gonna matter, and im too lazy to change it all to datetime
filename = f"maloja_export_{timestr}.json"
outputfile = os.path.join(targetfolder,filename)
assert not os.path.exists(outputfile)
data = {'scrobbles':get_scrobbles()}
data = {'maloja':{'export_time': timestamp },'scrobbles':get_scrobbles()}
with open(outputfile,'w') as outfd:
json.dump(data,outfd,indent=3)

View File

@@ -32,43 +32,62 @@ def import_scrobbles(inputf):
}
filename = os.path.basename(inputf)
importfunc = None
if re.match(r".*\.csv",filename):
typeid,typedesc = "lastfm","Last.fm"
if re.match(r"recenttracks-.*\.csv", filename):
typeid, typedesc = "lastfm", "Last.fm (ghan CSV)"
importfunc = parse_lastfm_ghan_csv
elif re.match(r".*\.csv", filename):
typeid,typedesc = "lastfm", "Last.fm (benjaminbenben CSV)"
importfunc = parse_lastfm
elif re.match(r"Streaming_History_Audio.+\.json",filename):
typeid,typedesc = "spotify","Spotify"
elif re.match(r"Streaming_History_Audio.+\.json", filename):
typeid,typedesc = "spotify", "Spotify"
importfunc = parse_spotify_lite
elif re.match(r"endsong_[0-9]+\.json",filename):
typeid,typedesc = "spotify","Spotify"
elif re.match(r"endsong_[0-9]+\.json", filename):
typeid,typedesc = "spotify", "Spotify"
importfunc = parse_spotify
elif re.match(r"StreamingHistory[0-9]+\.json",filename):
typeid,typedesc = "spotify","Spotify"
elif re.match(r"StreamingHistory[0-9]+\.json", filename):
typeid,typedesc = "spotify", "Spotify"
importfunc = parse_spotify_lite_legacy
elif re.match(r"maloja_export[_0-9]*\.json",filename):
typeid,typedesc = "maloja","Maloja"
elif re.match(r"maloja_export[_0-9]*\.json", filename):
typeid,typedesc = "maloja", "Maloja"
importfunc = parse_maloja
# username_lb-YYYY-MM-DD.json
elif re.match(r".*_lb-[0-9-]+\.json",filename):
typeid,typedesc = "listenbrainz","ListenBrainz"
elif re.match(r".*_lb-[0-9-]+\.json", filename):
typeid,typedesc = "listenbrainz", "ListenBrainz"
importfunc = parse_listenbrainz
elif re.match(r"\.scrobbler\.log",filename):
typeid,typedesc = "rockbox","Rockbox"
elif re.match(r"\.scrobbler\.log", filename):
typeid,typedesc = "rockbox", "Rockbox"
importfunc = parse_rockbox
else:
elif re.match(r"recenttracks-.*\.json", filename):
typeid, typedesc = "lastfm", "Last.fm (ghan JSON)"
importfunc = parse_lastfm_ghan_json
elif re.match(r".*\.json",filename):
try:
with open(inputf,'r') as fd: # open the full path, not just the basename
data = json.load(fd)
if 'maloja' in data:
typeid,typedesc = "maloja","Maloja"
importfunc = parse_maloja
except Exception:
pass
if not importfunc:
print("File",inputf,"could not be identified as a valid import source.")
return result
print(f"Parsing {col['yellow'](inputf)} as {col['cyan'](typedesc)} export")
print("This could take a while...")
print(f"Parsing {col['yellow'](inputf)} as {col['cyan'](typedesc)} export.")
print(col['red']("Please double-check if this is correct - if the import fails, the file might have been interpreted as the wrong type."))
timestamps = set()
scrobblebuffer = []
@@ -131,27 +150,29 @@ def import_scrobbles(inputf):
return result
def parse_spotify_lite_legacy(inputf):
pth = os.path
# use absolute paths internally for peace of mind. just change representation for console output
inputf = pth.abspath(inputf)
inputfolder = pth.dirname(inputf)
filenames = re.compile(r'StreamingHistory[0-9]+\.json')
inputfiles = [os.path.join(inputfolder,f) for f in os.listdir(inputfolder) if filenames.match(f)]
#inputfiles = [os.path.join(inputfolder,f) for f in os.listdir(inputfolder) if filenames.match(f)]
inputfiles = [inputf]
if len(inputfiles) == 0:
print("No files found!")
return
#if len(inputfiles) == 0:
# print("No files found!")
# return
if inputfiles != [inputf]:
print("Spotify files should all be imported together to identify duplicates across the whole dataset.")
if not ask("Import " + ", ".join(col['yellow'](pth.basename(i)) for i in inputfiles) + "?",default=True):
inputfiles = [inputf]
print("Only importing", col['yellow'](pth.basename(inputf)))
#if inputfiles != [inputf]:
# print("Spotify files should all be imported together to identify duplicates across the whole dataset.")
# if not ask("Import " + ", ".join(col['yellow'](pth.basename(i)) for i in inputfiles) + "?",default=True):
# inputfiles = [inputf]
# print("Only importing", col['yellow'](pth.basename(inputf)))
for inputf in inputfiles:
print("Importing",col['yellow'](inputf),"...")
#print("Importing",col['yellow'](inputf),"...")
with open(inputf,'r') as inputfd:
data = json.load(inputfd)
@@ -190,21 +211,22 @@ def parse_spotify_lite(inputf):
inputf = pth.abspath(inputf)
inputfolder = pth.dirname(inputf)
filenames = re.compile(r'Streaming_History_Audio.+\.json')
inputfiles = [os.path.join(inputfolder,f) for f in os.listdir(inputfolder) if filenames.match(f)]
#inputfiles = [os.path.join(inputfolder,f) for f in os.listdir(inputfolder) if filenames.match(f)]
inputfiles = [inputf]
if len(inputfiles) == 0:
print("No files found!")
return
#if len(inputfiles) == 0:
# print("No files found!")
# return
if inputfiles != [inputf]:
print("Spotify files should all be imported together to identify duplicates across the whole dataset.")
if not ask("Import " + ", ".join(col['yellow'](pth.basename(i)) for i in inputfiles) + "?",default=True):
inputfiles = [inputf]
print("Only importing", col['yellow'](pth.basename(inputf)))
#if inputfiles != [inputf]:
# print("Spotify files should all be imported together to identify duplicates across the whole dataset.")
# if not ask("Import " + ", ".join(col['yellow'](pth.basename(i)) for i in inputfiles) + "?",default=True):
# inputfiles = [inputf]
# print("Only importing", col['yellow'](pth.basename(inputf)))
for inputf in inputfiles:
print("Importing",col['yellow'](inputf),"...")
#print("Importing",col['yellow'](inputf),"...")
with open(inputf,'r') as inputfd:
data = json.load(inputfd)
@@ -243,23 +265,25 @@ def parse_spotify_lite(inputf):
print()
def parse_spotify(inputf):
pth = os.path
# use absolute paths internally for peace of mind. just change representation for console output
inputf = pth.abspath(inputf)
inputfolder = pth.dirname(inputf)
filenames = re.compile(r'endsong_[0-9]+\.json')
inputfiles = [os.path.join(inputfolder,f) for f in os.listdir(inputfolder) if filenames.match(f)]
#inputfiles = [os.path.join(inputfolder,f) for f in os.listdir(inputfolder) if filenames.match(f)]
inputfiles = [inputf]
if len(inputfiles) == 0:
print("No files found!")
return
#if len(inputfiles) == 0:
# print("No files found!")
# return
if inputfiles != [inputf]:
print("Spotify files should all be imported together to identify duplicates across the whole dataset.")
if not ask("Import " + ", ".join(col['yellow'](pth.basename(i)) for i in inputfiles) + "?",default=True):
inputfiles = [inputf]
print("Only importing", col['yellow'](pth.basename(inputf)))
#if inputfiles != [inputf]:
# print("Spotify files should all be imported together to identify duplicates across the whole dataset.")
# if not ask("Import " + ", ".join(col['yellow'](pth.basename(i)) for i in inputfiles) + "?",default=True):
# inputfiles = [inputf]
# print("Only importing", col['yellow'](pth.basename(inputf)))
# we keep timestamps here as well to remove duplicates because spotify's export
# is messy - this is specific to this import type and should not be mixed with
@@ -270,7 +294,7 @@ def parse_spotify(inputf):
for inputf in inputfiles:
print("Importing",col['yellow'](inputf),"...")
#print("Importing",col['yellow'](inputf),"...")
with open(inputf,'r') as inputfd:
data = json.load(inputfd)
@@ -354,6 +378,7 @@ def parse_spotify(inputf):
print()
def parse_lastfm(inputf):
with open(inputf,'r',newline='') as inputfd:
@@ -388,6 +413,44 @@ def parse_lastfm(inputf):
yield ('FAIL',None,f"{row} (Line {line}) could not be parsed. Scrobble not imported. ({repr(e)})")
continue
def parse_lastfm_ghan_json(inputf):
with open(inputf, 'r') as inputfd:
data = json.load(inputfd)
skip = 50000 # leftover debug counter; decremented below but otherwise unused
for entry in data:
for track in entry['track']:
skip -= 1
#if skip: continue
#print(track)
#input()
yield ('CONFIDENT_IMPORT', {
'track_title': track['name'],
'track_artists': track['artist']['#text'],
'track_length': None,
'album_name': track['album']['#text'],
'scrobble_time': int(track['date']['uts']),
'scrobble_duration': None
}, '')
def parse_lastfm_ghan_csv(inputf):
with open(inputf, 'r') as inputfd:
reader = csv.DictReader(inputfd)
for row in reader:
yield ('CONFIDENT_IMPORT', {
'track_title': row['track'],
'track_artists': row['artist'],
'track_length': None,
'album_name': row['album'],
'scrobble_time': int(row['uts']),
'scrobble_duration': None
}, '')
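Like the other parse_ functions, both ghan parsers yield (status, fields, message) tuples; a hypothetical consumer to show the contract (preview_import is not part of the codebase):

def preview_import(parser, path):
	# walk any parse_* generator and report its entries
	for status, fields, message in parser(path):
		if status == 'FAIL':
			print("skipped:", message)
		else:
			print(status, fields['track_title'], "@", fields['scrobble_time'])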
def parse_listenbrainz(inputf):
with open(inputf,'r') as inputfd:

View File

@@ -3,6 +3,7 @@ import os
from threading import Thread
from importlib import resources
import time
from magic import from_file
# server stuff
@@ -12,14 +13,13 @@ from jinja2.exceptions import TemplateNotFound
# doreah toolkit
from doreah.logging import log
from doreah import auth
# rest of the project
from . import database
from .database.jinjaview import JinjaDBConnection
from .images import image_request
from .malojauri import uri_to_internal, remove_identical
from .pkg_global.conf import malojaconfig, data_dir
from .pkg_global.conf import malojaconfig, data_dir, auth
from .pkg_global import conf
from .jinjaenv.context import jinja_environment
from .apis import init_apis, apikeystore
@@ -97,7 +97,7 @@ aliases = {
### API
auth.authapi.mount(server=webserver)
conf.auth.authapi.mount(server=webserver)
init_apis(webserver)
# redirects for backwards compatibility
@@ -155,7 +155,8 @@ def static_image(pth):
@webserver.route("/cacheimages/<uuid>")
def static_proxied_image(uuid):
return static_file(uuid,root=data_dir['cache']('images'))
mimetype = from_file(os.path.join(data_dir['cache']('images'),uuid),True)
return static_file(uuid,root=data_dir['cache']('images'),mimetype=mimetype)
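from_file here comes from python-magic; its second parameter is the mime flag, so the call is equivalent to this standalone sketch (the path is hypothetical):

import magic

# mime=True returns e.g. 'image/jpeg' instead of a human-readable
# description, which is what the Content-Type header needs
mimetype = magic.from_file("/state/cache/images/someuuid", mime=True)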
@webserver.route("/login")
def login():
@@ -166,16 +167,16 @@ def login():
@webserver.route("/media/<name>.<ext>")
def static(name,ext):
assert ext in ["txt","ico","jpeg","jpg","png","less","js","ttf","css"]
with resources.files('maloja') / 'web' / 'static' as staticfolder:
response = static_file(ext + "/" + name + "." + ext,root=staticfolder)
staticfolder = resources.files('maloja') / 'web' / 'static'
response = static_file(ext + "/" + name + "." + ext,root=staticfolder)
response.set_header("Cache-Control", "public, max-age=3600")
return response
# new, direct reference
@webserver.route("/static/<path:path>")
def static(path):
with resources.files('maloja') / 'web' / 'static' as staticfolder:
response = static_file(path,root=staticfolder)
staticfolder = resources.files('maloja') / 'web' / 'static'
response = static_file(path,root=staticfolder)
response.set_header("Cache-Control", "public, max-age=3600")
return response
@@ -197,7 +198,7 @@ def jinja_page(name):
if name in aliases: redirect(aliases[name])
keys = remove_identical(FormsDict.decode(request.query))
adminmode = request.cookies.get("adminmode") == "true" and auth.check(request)
adminmode = request.cookies.get("adminmode") == "true" and auth.check_request(request)
with JinjaDBConnection() as conn:
@@ -222,7 +223,7 @@ def jinja_page(name):
return res
@webserver.route("/<name:re:admin.*>")
@auth.authenticated
@auth.authenticated_function()
def jinja_page_private(name):
return jinja_page(name)

View File

@@ -1,14 +1,13 @@
import os
import shutil
import stat
from importlib import resources
try:
from setuptools import distutils
except ImportError:
import distutils
from doreah.io import col, ask, prompt
from doreah import auth
from pathlib import PosixPath
from .pkg_global.conf import data_dir, dir_settings, malojaconfig
from doreah.io import col, ask, prompt
from .pkg_global.conf import data_dir, dir_settings, malojaconfig, auth
@@ -23,22 +22,39 @@ ext_apikeys = [
def copy_initial_local_files():
with resources.files("maloja") / 'data_files' as folder:
for cat in dir_settings:
distutils.dir_util.copy_tree(os.path.join(folder,cat),dir_settings[cat],update=False)
data_file_source = resources.files("maloja") / 'data_files'
for cat in dir_settings:
if dir_settings[cat] is None:
continue
if cat == 'config' and malojaconfig.readonly:
continue
# to avoid permission problems with the root dir
for subfolder in os.listdir(data_file_source / cat):
src = data_file_source / cat / subfolder
dst = PosixPath(dir_settings[cat]) / subfolder
if os.path.isdir(src):
shutil.copytree(src, dst, dirs_exist_ok=True)
# fix permissions (u+w)
for dirpath, _, filenames in os.walk(dst):
os.chmod(dirpath, os.stat(dirpath).st_mode | stat.S_IWUSR)
for filename in filenames:
filepath = os.path.join(dirpath, filename)
os.chmod(filepath, os.stat(filepath).st_mode | stat.S_IWUSR)
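The permission fix uses the usual add-a-bit chmod idiom; in isolation (the helper name is hypothetical):

import os
import stat

def make_user_writable(path):
	# OR the current mode with u+w so no other permission bits are dropped
	os.chmod(path, os.stat(path).st_mode | stat.S_IWUSR)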
charset = list(range(10)) + list("abcdefghijklmnopqrstuvwxyz") + list("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
def randomstring(length=32):
import random
return "".join(str(random.choice(charset)) for _ in range(length))
def setup():
copy_initial_local_files()
SKIP = malojaconfig["SKIP_SETUP"]
try:
print("Various external services can be used to display images. If not enough of them are set up, only local images will be used.")
for k in ext_apikeys:
keyname = malojaconfig.get_setting_info(k)['name']
@@ -46,9 +62,12 @@ def setup():
if key is False:
print(f"\tCurrently not using a {col['red'](keyname)} for image display.")
elif key is None or key == "ASK":
promptmsg = f"\tPlease enter your {col['gold'](keyname)}. If you do not want to use one at this moment, simply leave this empty and press Enter."
key = prompt(promptmsg,types=(str,),default=False,skip=SKIP)
malojaconfig[k] = key
if malojaconfig.readonly:
print(f"\tCurrently not using a {col['red'](keyname)} for image display - config is read only.")
else:
promptmsg = f"\tPlease enter your {col['gold'](keyname)}. If you do not want to use one at this moment, simply leave this empty and press Enter."
key = prompt(promptmsg,types=(str,),default=False,skip=SKIP)
malojaconfig[k] = key
else:
print(f"\t{col['lawngreen'](keyname)} found.")
@@ -67,10 +86,10 @@ def setup():
if forcepassword is not None:
# user has specified to force the pw, nothing else matters
auth.defaultuser.setpw(forcepassword)
auth.change_pw(password=forcepassword)
print("Password has been set.")
elif auth.defaultuser.checkpw("admin"):
# if the actual pw is admin, it means we've never set this up properly (eg first start after update)
elif auth.still_has_factory_default_user():
# this means we've never set this up properly (eg first start after update)
while True:
newpw = prompt("Please set a password for web backend access. Leave this empty to generate a random password.",skip=SKIP,secret=True)
if newpw is None:
@@ -81,7 +100,7 @@ def setup():
newpw_repeat = prompt("Please type again to confirm.",skip=SKIP,secret=True)
if newpw != newpw_repeat: print("Passwords do not match!")
else: break
auth.defaultuser.setpw(newpw)
auth.change_pw(password=newpw)
except EOFError:
print("No user input possible. If you are running inside a container, set the environment variable",col['yellow']("MALOJA_SKIP_SETUP=yes"))

View File

@@ -75,7 +75,7 @@
<a href="/"><img style="display:block;" src="/favicon.png" /></a>
</div>
<div id="right-side">
<span><input id="searchinput" placeholder="Search for an artist or track..." oninput="search(this)" onblur="clearresults()" /></span>
<span><input id="searchinput" placeholder="Search for an album, artist or track..." oninput="search(this)" onblur="clearresults()" /></span>
</div>

View File

@@ -6,6 +6,8 @@
Here you can find tracks that currently have no album.<br/><br/>
{% with list = dbc.get_tracks_without_album() %}
You have {{list|length}} tracks with no album.<br/><br/>
{% include 'partials/list_tracks.jinja' %}
{% endwith %}

View File

@@ -15,7 +15,7 @@
var xhttp = new XMLHttpRequest();
xhttp.open("POST","/api/newrule?", true);
xhttp.open("POST","/apis/mlj_1/newrule?", true);
xhttp.send(keys);
e = arguments[0];
line = e.parentNode;
@@ -25,7 +25,7 @@
function fullrebuild() {
var xhttp = new XMLHttpRequest();
xhttp.open("POST","/api/rebuild", true);
xhttp.open("POST","/apis/mlj_1/rebuild", true);
xhttp.send();
window.location = "/wait";

View File

@@ -67,9 +67,9 @@
<li>manually scrobble from track pages</li>
<li>delete scrobbles</li>
<li>reparse scrobbles</li>
<li>edit tracks and artists</li>
<li>merge tracks and artists</li>
<li>upload artist and track art by dropping a file on the existing image on an artist or track page</li>
<li>edit tracks, albums and artists</li>
<li>merge tracks, albums and artists</li>
<li>upload artist, album and track art by dropping a file on the existing image on an artist or track page</li>
<li>see more detailed error pages</li>
</ul>

View File

@@ -24,7 +24,7 @@
keys = "filename=" + encodeURIComponent(filename);
console.log(keys);
var xhttp = new XMLHttpRequest();
xhttp.open("POST","/api/importrules", true);
xhttp.open("POST","/apis/mlj_1/importrules", true);
xhttp.send(keys);
e.innerHTML = e.innerHTML.replace("Add","Remove");
@@ -36,7 +36,7 @@
keys = "remove&filename=" + encodeURIComponent(filename);
var xhttp = new XMLHttpRequest();
xhttp.open("POST","/api/importrules", true);
xhttp.open("POST","/apis/mlj_1/importrules", true);
xhttp.send(keys);
e.innerHTML = e.innerHTML.replace("Remove","Add");
@@ -56,7 +56,7 @@
If you use a Chromium-based browser and listen to music on Plex, Spotify, Soundcloud, Bandcamp or YouTube Music, download the extension and simply enter the server URL as well as your API key in the relevant fields. They will turn green if the server is accessible.
<br/><br/>
You can also use any standard-compliant scrobbler. For GNUFM (audioscrobbler) scrobblers, enter <span class="stats"><span name="serverurl">yourserver.tld</span>/apis/audioscrobbler</span> as your Gnukebox server and your API key as the password. For Listenbrainz scrobblers, use <span class="stats"><span name="serverurl">yourserver.tld</span>/apis/listenbrainz</span> as the API URL and your API key as token.
You can also use any standard-compliant scrobbler. For GNUFM (audioscrobbler) scrobblers, enter <span class="stats"><span name="serverurl">yourserver.tld</span>/apis/audioscrobbler</span> as your Gnukebox server and your API key as the password. For Listenbrainz scrobblers, use <span class="stats"><span name="serverurl">yourserver.tld</span>/apis/listenbrainz</span> as the API URL (depending on the implementation, you might need to add a <span class="stats">/1</span> at the end) and your API key as token.
<br/><br/>
If you use another browser or another music player, you could try to code your own extension. The API is super simple! Just send a POST HTTP request to

View File

@@ -29,7 +29,7 @@
{% for entry in dbc.get_charts_albums(filterkeys,limitkeys,{'only_own_albums':False}) %}
{% if artist not in (entry.album.artists or []) %}
{% if info.artist not in (entry.album.artists or []) %}
{%- set cert = None -%}
{%- if entry.scrobbles >= settings.scrobbles_gold_album -%}{% set cert = 'gold' %}{%- endif -%}

View File

@@ -20,11 +20,11 @@
<td class='searchProvider'>{{ links.link_search(entity) }}</td>
{% endif %}
<td class='track'>
<span class='artist_in_trackcolumn'>{{ links.links(entity.artists) }}</span> {{ links.link(entity) }}
<span class='artist_in_trackcolumn'>{{ links.links(entity.artists, restrict_amount=True) }}</span> {{ links.link(entity) }}
</td>
{% elif entity is mapping and 'albumtitle' in entity %}
<td class='album'>
<span class='artist_in_trackcolumn'>{{ links.links(entity.artists) }}</span> {{ links.link(entity) }}
<span class='artist_in_albumcolumn'>{{ links.links(entity.artists, restrict_amount=True) }}</span> {{ links.link(entity) }}
</td>
{% else %}
<td class='artist'>{{ links.link(entity) }}

View File

@@ -8,9 +8,11 @@
<a href="{{ url(entity) }}">{{ name | e }}</a>
{%- endmacro %}
{% macro links(entities) -%}
{% macro links(entities, restrict_amount=False) -%}
{% if entities is none or entities == [] %}
{{ settings["DEFAULT_ALBUM_ARTIST"] }}
{% elif entities.__len__() > 3 and restrict_amount %}
{{ link(entities[0]) }} et al.
{% else %}
{% for entity in entities -%}
{{ link(entity) }}{{ ", " if not loop.last }}

View File

@@ -363,12 +363,14 @@ div#notification_area {
right:20px;
}
div#notification_area div.notification {
background-color:white;
background-color:black;
width:400px;
min-height:50px;
margin-bottom:7px;
padding:9px;
opacity:0.4;
opacity:0.5;
border-left: 8px solid var(--notification-color);
border-radius: 3px;
}
div#notification_area div.notification:hover {
opacity:0.95;
@@ -781,6 +783,9 @@ table.list td.artists,td.artist,td.title,td.track {
table.list td.track span.artist_in_trackcolumn {
color: var(--text-color-secondary);
}
table.list td.album span.artist_in_albumcolumn {
color: var(--text-color-secondary);
}
table.list td.searchProvider {
width: 20px;
@@ -987,6 +992,7 @@ table.misc td {
div.tiles {
max-height: 600px;
display: grid;
grid-template-columns: repeat(18, calc(100% / 18));
grid-template-rows: repeat(6, calc(100% / 6));

View File

@@ -22,8 +22,8 @@ div#startpage {
@media (min-width: 1401px) and (max-width: 2200px) {
div#startpage {
grid-template-columns: 45vw 45vw;
grid-template-rows: 45vh 45vh 45vh;
grid-template-columns: repeat(2, 45vw);
grid-template-rows: repeat(3, 45vh);
grid-template-areas:
"charts_artists lastscrobbles"

View File

@@ -126,7 +126,7 @@ function scrobble(artists,title,albumartists,album,timestamp) {
lastArtists = artists;
lastTrack = title;
lastAlbum = album;
lastAlbumartists = albumartists;
lastAlbumartists = albumartists || [];
var payload = {
"artists":artists,
@@ -186,7 +186,7 @@ function search_manualscrobbling(searchfield) {
else {
xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = searchresult_manualscrobbling;
xhttp.open("GET","/api/search?max=5&query=" + encodeURIComponent(txt), true);
xhttp.open("GET","/apis/mlj_1/search?max=5&query=" + encodeURIComponent(txt), true);
xhttp.send();
}
}

View File

@@ -1,12 +1,14 @@
// JS for feedback to the user whenever any XHTTP action is taken
const colors = {
'warning':'red',
'error': 'red',
'warning':'#8ACC26',
'info':'green'
}
const notification_template = info => `
<div class="notification" style="background-color:${colors[info.notification_type]};">
<div class="notification" style="--notification-color: ${colors[info.notification_type]};">
<b>${info.title}</b><br/>
<span>${info.body}</span>
@@ -35,18 +37,24 @@ function notify(title,msg,notification_type='info',reload=false) {
}
function notifyCallback(request) {
var body = request.response;
var response = request.response;
var status = request.status;
if (status == 200) {
var notification_type = 'info';
if (response.hasOwnProperty('warnings') && response.warnings.length > 0) {
var notification_type = 'warning';
}
else {
var notification_type = 'info';
}
var title = "Success!";
var msg = body.desc || body;
var msg = response.desc || response;
}
else {
var notification_type = 'warning';
var title = "Error: " + body.error.type;
var msg = body.error.desc || "";
var notification_type = 'error';
var title = "Error: " + response.error.type;
var msg = response.error.desc || "";
}

View File

@@ -1,17 +1,23 @@
var searches = []
var searches = [];
var debounceTimer;
function search(searchfield) {
txt = searchfield.value;
if (txt == "") {
reallyclear()
}
else {
xhttp = new XMLHttpRequest();
searches.push(xhttp)
xhttp.onreadystatechange = searchresult
xhttp.open("GET","/api/search?max=5&query=" + encodeURIComponent(txt), true);
xhttp.send();
}
clearTimeout(debounceTimer);
debounceTimer = setTimeout(() => {
const txt = searchfield.value;
if (txt == "") {
reallyclear();
}
else {
const xhttp = new XMLHttpRequest();
searches.push(xhttp);
xhttp.onreadystatechange = searchresult
xhttp.open("GET","/apis/mlj_1/search?max=5&query=" + encodeURIComponent(txt), true);
xhttp.send();
}
}, 1000);
}

View File

@@ -1,3 +1,3 @@
function upload(encodedentity,b64) {
neo.xhttprequest("/api/addpicture?" + encodedentity,{"b64":b64},"POST")
neo.xhttprequest("/apis/mlj_1/addpicture?" + encodedentity,{"b64":b64},"POST")
}

View File

@@ -1,10 +1,10 @@
[project]
name = "malojaserver"
version = "3.2.2"
version = "3.2.4"
description = "Self-hosted music scrobble database"
readme = "./README.md"
requires-python = ">=3.10"
license = { file="./LICENSE" }
readme = "README.md"
requires-python = "==3.12.*"
license = { file="LICENSE" }
authors = [ { name="Johannes Krattenmacher", email="maloja@dev.krateng.ch" } ]
urls.repository = "https://github.com/krateng/maloja"
@@ -19,31 +19,32 @@ classifiers = [
]
dependencies = [
"bottle>=0.12.16",
"waitress>=2.1.0",
"doreah>=1.9.4, <2",
"nimrodel>=0.8.0",
"setproctitle>=1.1.10",
#"pyvips>=2.1.16",
"jinja2>=3.0.0",
"lru-dict>=1.1.6",
"psutil>=5.8.0",
"sqlalchemy>=2.0",
"python-datauri>=1.1.0",
"requests>=2.27.1",
"setuptools>68.0.0"
"bottle==0.13.*",
"waitress==3.0.*",
"doreah==2.0.*",
"nimrodel==0.8.*",
"setproctitle==1.3.*",
"jinja2==3.1.*",
"lru-dict==1.3.*",
"psutil==5.9.*",
"sqlalchemy==2.0",
"python-datauri==3.0.*",
"python-magic==0.4.*",
"requests==2.32.*",
"toml==0.10.*",
"PyYAML==6.0.*"
]
[project.optional-dependencies]
full = [
"pyvips>=2.1"
"pyvips==2.2.*"
]
[project.scripts]
maloja = "maloja.__main__:main"
[build-system]
requires = ["flit_core >=3.2,<4"]
requires = ["flit_core >=3.10,<4"]
build-backend = "flit_core.buildapi"
[tool.flit.module]
@@ -64,7 +65,8 @@ build =[
run = [
"python3",
"py3-lxml",
"tzdata"
"tzdata",
"libmagic"
]
opt = [
"vips"

View File

@@ -1,12 +1,15 @@
bottle>=0.12.16
waitress>=2.1.0
doreah>=1.9.4, <2
nimrodel>=0.8.0
setproctitle>=1.1.10
jinja2>=3.0.0
lru-dict>=1.1.6
psutil>=5.8.0
sqlalchemy>=2.0
python-datauri>=1.1.0
requests>=2.27.1
setuptools>68.0.0
bottle==0.13.*
waitress==3.0.*
doreah==2.0.*
nimrodel==0.8.*
setproctitle==1.3.*
jinja2==3.1.*
lru-dict==1.3.*
psutil==5.9.*
sqlalchemy==2.0
python-datauri==3.0.*
python-magic==0.4.*
requests==2.32.*
toml==0.10.*
PyYAML==6.0.*

View File

@@ -1,2 +1,2 @@
pyvips>=2.1
pyvips==2.2.*

View File

@@ -32,14 +32,17 @@ Settings File | Environment Variable | Type | Description
`cache_expire_negative` | `MALOJA_CACHE_EXPIRE_NEGATIVE` | Integer | Days until failed image fetches are reattempted
`db_max_memory` | `MALOJA_DB_MAX_MEMORY` | Integer | RAM Usage in percent at which Maloja should no longer increase its database cache.
`use_request_cache` | `MALOJA_USE_REQUEST_CACHE` | Boolean | Use request-local DB Cache
`use_global_cache` | `MALOJA_USE_GLOBAL_CACHE` | Boolean | Use global DB Cache
`use_global_cache` | `MALOJA_USE_GLOBAL_CACHE` | Boolean | This is vital for Maloja's performance. Do not disable this unless you have a strong reason to.
**Fluff**
`scrobbles_gold` | `MALOJA_SCROBBLES_GOLD` | Integer | How many scrobbles a track needs to be considered 'Gold' status
`scrobbles_platinum` | `MALOJA_SCROBBLES_PLATINUM` | Integer | How many scrobbles a track needs to be considered 'Platinum' status
`scrobbles_diamond` | `MALOJA_SCROBBLES_DIAMOND` | Integer | How many scrobbles a track needs to be considered 'Diamond' status
`scrobbles_gold_album` | `MALOJA_SCROBBLES_GOLD_ALBUM` | Integer | How many scrobbles an album needs to be considered 'Gold' status
`scrobbles_platinum_album` | `MALOJA_SCROBBLES_PLATINUM_ALBUM` | Integer | How many scrobbles an album needs to be considered 'Platinum' status
`scrobbles_diamond_album` | `MALOJA_SCROBBLES_DIAMOND_ALBUM` | Integer | How many scrobbles an album needs to be considered 'Diamond' status
`name` | `MALOJA_NAME` | String | Name
**Third Party Services**
`metadata_providers` | `MALOJA_METADATA_PROVIDERS` | List | Which metadata providers should be used in what order. Musicbrainz is rate-limited and should not be used first.
`metadata_providers` | `MALOJA_METADATA_PROVIDERS` | List | List of which metadata providers should be used in what order. Musicbrainz is rate-limited and should not be used first.
`scrobble_lastfm` | `MALOJA_SCROBBLE_LASTFM` | Boolean | Proxy-Scrobble to Last.fm
`lastfm_api_key` | `MALOJA_LASTFM_API_KEY` | String | Last.fm API Key
`lastfm_api_secret` | `MALOJA_LASTFM_API_SECRET` | String | Last.fm API Secret
@@ -55,6 +58,7 @@ Settings File | Environment Variable | Type | Description
`send_stats` | `MALOJA_SEND_STATS` | Boolean | Send Statistics
`proxy_images` | `MALOJA_PROXY_IMAGES` | Boolean | Whether third party images should be downloaded and served directly by Maloja (instead of just linking their URL)
**Database**
`album_information_trust` | `MALOJA_ALBUM_INFORMATION_TRUST` | Choice | Whether to trust the first album information that is sent with a track or update every time a different album is sent
`invalid_artists` | `MALOJA_INVALID_ARTISTS` | Set | Artists that should be discarded immediately
`remove_from_title` | `MALOJA_REMOVE_FROM_TITLE` | Set | Phrases that should be removed from song titles
`delimiters_feat` | `MALOJA_DELIMITERS_FEAT` | Set | Delimiters used for extra artists, even when in the title field
@@ -62,14 +66,20 @@ Settings File | Environment Variable | Type | Description
`delimiters_formal` | `MALOJA_DELIMITERS_FORMAL` | Set | Delimiters used to tag multiple artists when only one tag field is available
`filters_remix` | `MALOJA_FILTERS_REMIX` | Set | Filters used to recognize the remix artists in the title
`parse_remix_artists` | `MALOJA_PARSE_REMIX_ARTISTS` | Boolean | Parse Remix Artists
`week_offset` | `MALOJA_WEEK_OFFSET` | Integer | Start of the week for the purpose of weekly statistics. 0 = Sunday, 6 = Saturday
`timezone` | `MALOJA_TIMEZONE` | Integer | UTC Offset
`location_timezone` | `MALOJA_LOCATION_TIMEZONE` | String | Location Timezone (overrides `timezone`)
**Web Interface**
`default_range_charts_artists` | `MALOJA_DEFAULT_RANGE_CHARTS_ARTISTS` | Choice | Default Range Artist Charts
`default_range_charts_tracks` | `MALOJA_DEFAULT_RANGE_CHARTS_TRACKS` | Choice | Default Range Track Charts
`default_range_startpage` | `MALOJA_DEFAULT_RANGE_STARTPAGE` | Choice | Default Range for Startpage Stats
`default_step_pulse` | `MALOJA_DEFAULT_STEP_PULSE` | Choice | Default Pulse Step
`charts_display_tiles` | `MALOJA_CHARTS_DISPLAY_TILES` | Boolean | Display Chart Tiles
`album_showcase` | `MALOJA_ALBUM_SHOWCASE` | Boolean | Display a graphical album showcase for artist overview pages instead of a chart list
`display_art_icons` | `MALOJA_DISPLAY_ART_ICONS` | Boolean | Display Album/Artist Icons
`default_album_artist` | `MALOJA_DEFAULT_ALBUM_ARTIST` | String | Default Albumartist
`use_album_artwork_for_tracks` | `MALOJA_USE_ALBUM_ARTWORK_FOR_TRACKS` | Boolean | Use Album Artwork for tracks
`fancy_placeholder_art` | `MALOJA_FANCY_PLACEHOLDER_ART` | Boolean | Use fancy placeholder artwork
`show_play_number_on_tiles` | `MALOJA_SHOW_PLAY_NUMBER_ON_TILES` | Boolean | Show number of plays on tiles
`discourage_cpu_heavy_stats` | `MALOJA_DISCOURAGE_CPU_HEAVY_STATS` | Boolean | Prevent visitors from mindlessly clicking on CPU-heavy options. Does not actually disable them for malicious actors!
`use_local_images` | `MALOJA_USE_LOCAL_IMAGES` | Boolean | Use Local Images
`timezone` | `MALOJA_TIMEZONE` | Integer | UTC Offset
`time_format` | `MALOJA_TIME_FORMAT` | String | Time Format
`theme` | `MALOJA_THEME` | String | Theme
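The new `location_timezone` option takes an IANA zone name and, as noted in the table, overrides the fixed-offset `timezone` setting. A short sketch of the practical difference, with `Europe/Berlin` as an illustrative zone (the values shown are examples, not defaults):

```python
# Sketch: fixed UTC offset (timezone = 1) vs. an IANA zone name
# (location_timezone = "Europe/Berlin"). Only the latter tracks DST.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

fixed = timezone(timedelta(hours=1))
zone = ZoneInfo("Europe/Berlin")

instant = datetime(2025, 7, 1, 12, 0, tzinfo=timezone.utc)
print(instant.astimezone(fixed))  # 13:00+01:00, all year round
print(instant.astimezone(zone))   # 14:00+02:00 in summer (DST-aware)
```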

View File

@@ -1,40 +0,0 @@
import setuptools
import toml

# Re-use the metadata from pyproject.toml so it only has to be maintained
# in one place.
with open("pyproject.toml") as fd:
    pkgdata = toml.load(fd)
projectdata = pkgdata['project']

# extract info
with open(projectdata['readme'], "r") as fh:
    long_description = fh.read()

setuptools.setup(
    name=projectdata['name'],
    version=projectdata['version'],
    author=projectdata['authors'][0]['name'],
    author_email=projectdata['authors'][0]['email'],
    description=projectdata["description"],
    license="GPLv3",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url=projectdata['urls']['repository'],
    packages=setuptools.find_packages("."),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
        "Operating System :: OS Independent",
    ],
    python_requires=projectdata['requires-python'],
    install_requires=projectdata['dependencies'],
    # Include all nested package files (templates, static assets, etc.).
    package_data={'': ['*','*/*','*/*/*','*/*/*/*','*/*/.*','*/*/*/.*']},
    include_package_data=True,
    # Recreate the [project.scripts] entries as console-script entry points.
    entry_points={
        'console_scripts': [
            k + '=' + projectdata['scripts'][k] for k in projectdata['scripts']
        ]
    }
)
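With the build fully described by `pyproject.toml` and handled by flit (see the `[build-system]` table above), this shim is redundant. For read-only access to the same metadata, the standard-library `tomllib` (available on the Python 3.12 baseline now required) would also replace the third-party `toml` import used here; a hypothetical sketch:

```python
# Hypothetical sketch: reading the same metadata with the stdlib tomllib
# (Python 3.11+) instead of the third-party toml package the shim used.
import tomllib

with open("pyproject.toml", "rb") as fd:  # tomllib requires binary mode
    projectdata = tomllib.load(fd)["project"]

print(projectdata["name"], projectdata["version"])
```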