For me this raises the question of whether there are any good remote desktop solutions for multi-user systems. RustDesk is single-user; TurboVNC works, but there can be lag.
I’ve done it for almost a decade now, to the point of packaging “stacks” inside Docker for specific tasks: https://github.com/rcarmo/azure-toolbox
These days I have a Docker container with Remmina that I use as a bastion (fronted by Cloudflare and Authelia for OIDC), but everything else is LXC with xrdp and hardware acceleration for both application rendering and desktop streaming (xorgxrdp-glamor is much, MUCH better than VNC).
I am, however, struggling to find a good way to stream Wayland desktops over RDP.
Can you not use the X11 server packaged with WSL as your display driver, and avoid piping this all into the web browser?
Seems very inefficient to have to render everything through the browser
A fun idea might be to combine something like this with Tailscale & their Mullvad add-on, so you get ephemeral browsing environments with VPN connectivity, could make it easy to test from various countries simultaneously on a single host.
Samsung DeX had a Linux desktop package in 2018. It was an LXD container based on Ubuntu 16.04, developed in collaboration with Canonical. Unfortunately they deprecated it shortly after, possibly as early as 2018, and the next Android update would remove it.
It worked, but Android killed it mercilessly if it used too much memory or the rest of the system needed it.
I still remember how much I liked the idea. I really tried to use it, but the experience with both browsers and VS Code was... not that great.
Kinda hope they revisit this idea in the near future.
I use this https://www.reddit.com/r/selfhosted/comments/13e25l9/tutoria...
My clients are a Raspberry Pi 4 and an older iPad; sometimes I use an Android phone as well. Works really well.
> Google acts as a meet-me point and also provides the authentication mechanisms including MFA.
On one hand, it made me chuckle a bit. On the other hand, it could be reasonable in many scenarios.
I run my server on a connection that's behind CGNAT plus NAT from my home router, so there's no option for me other than Chrome Remote Desktop. It also does P2P.
If you create an outbound tunnel, your options are whatever you want; NAT and CGNAT only affect inbound routing.
Check into Tailscale or Cloudflare Tunnels/Argo.
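Besides Tailscale or Cloudflare, one minimal way to get an outbound tunnel through CGNAT is a plain SSH reverse forward to any machine you can reach. A sketch as an ssh_config entry, assuming a rented VPS (the host alias and address are placeholders):

```
Host relay
    HostName vps.example.com
    RemoteForward 3389 localhost:3389
    ExitOnForwardFailure yes
    ServerAliveInterval 30
```

Running `ssh -N relay` from the CGNAT'd server then exposes its RDP port on the VPS's loopback (the `GatewayPorts` server option controls whether it's reachable more widely).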
I develop my apps in the most native way I can: deb packages, an apt repo, systemd, journald, etc. However, I'd also like to be able to run them in Docker or a VM. Is there a good systemd-in-Docker solution that lets me basically not run anything differently, so I don't have to maintain two sets of systems?
Containers with systemd as the init process are considered first-class citizens by the Podman ecosystem (the base images are named accordingly, e.g., ubi10-init vs. ubi10).
Have you looked at systemd-nspawn[0]? It's not Docker, so it wouldn't be useful for writing Dockerfiles, but it gives you lightweight containers that work beautifully with systemd.
[0] https://wiki.archlinux.org/title/Systemd-nspawn
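If you go this route, nspawn containers can also be configured declaratively with `.nspawn` files. A minimal sketch, assuming a root filesystem at /var/lib/machines/trixie-build (the machine name is made up):

```ini
# /etc/systemd/nspawn/trixie-build.nspawn
[Exec]
Boot=yes            ; run systemd as PID 1 inside the container

[Network]
VirtualEthernet=yes ; give it its own virtual ethernet link
```

With the rootfs in place (e.g. from debootstrap), `machinectl start trixie-build` boots it and `machinectl shell trixie-build` drops you into it.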
Thanks, this looks awesome! I'll play around with it on my CI/CD first to see if it's any good for the build server adding trixie builds. Might use it in prod deploys later.
https://github.com/Azure/dalec
Build system packages and containers from those packages for a given target distro.
Behind the scenes it uses BuildKit, so there's nothing extra you need, just Docker (or any BuildKit daemon).
You might be better served by Incus/LXD, which run "Linux containers" (i.e., a full distro including systemd, SSH, etc.) as opposed to OCI containers.
You could use Nix to build the package and provide a nixos module and a docker image from the same derivation. Now you only have to manage three systems instead of two. /s
On Windows, doesn't this technically mean OP is running Linux inside a Linux VM inside Windows? From what I understand, Docker is Linux tech, and to use it anywhere else a (small) Linux VM is required. If true, I would dispense with the extra layer and just run a Linux VM. Not to discourage experimentation though!
Almost.
For one thing, Docker is not really "Linux inside Linux". It uses Linux kernel features to isolate the processes inside a container from those outside. But there is only one Linux kernel which is shared by both the container and its host (within the Linux VM, in this case).
For another, running Linux containers in a Linux VM on Windows is one (common) way that Docker can work. But it also supports running Windows containers on Windows, and in that case, the Windows kernel is shared just like in the Linux case. So Docker is not exactly "Linux tech".
I think GP is likely referring to Docker Desktop, which is probably the most common way to use Docker on Windows.
When running Linux containers, Docker Desktop uses a small Linux VM in which the containers run, and Docker does some mucking about to integrate that better with the Windows host OS.
I thought Docker only supports Windows as a host if you enable WSL, in which case you're running on Hyper-V and a Linux kernel as part of WSL2, so absolutely Linux tech on a Linux VM on Windows... Am I wrong?
You are. You can run Docker for Windows, and run Windows binaries in reasonably isolated containers, without involving Linux at all [1]. Much like you run Linux containers on Linux without involving Windows.
It's Docker Desktop that assumes WSL; the Docker engine does not. Also, you seem to need Windows Server; I don't know if it can be made to work on a Pro version.
[1]: https://learn.microsoft.com/en-us/virtualization/windowscont...
Docker supports either Hyper-V or WSL2 as a host for the Linux kernel; they generally push people towards WSL2. I vaguely recall WSL2 uses a subset of Hyper-V, the name of which escapes me atm.
Can he install Wine in the Docker container to run Windows games from it?
Isn’t this the case on macOS too?
I desperately wish I could run docker properly (CLI) on the Mac rather than use docker desktop, and while we are making a dream list, can I just run Ubuntu on the Mac mini?
I've been using Colima for CLI Docker on my ARM Mac. It's pretty straightforward using Homebrew.
Colima is great. However, starting in macOS 15 Sequoia and more fully in the upcoming macOS 26 Tahoe, Apple is beginning to provide a first-party solution:
https://github.com/apple/container
I've been experimenting with it in macOS 15, and I was able to replace Colima entirely for my purposes. Running container images right off of Docker Hub, without Docker / Podman / etc.
(And yes, it is using a small Linux VM run under Apple's Virtualization framework.)
I ran into various issues, I think, but my main objective was running a full k3s cluster this way. Reckon this is achievable with full networking support now? Also, if I already have Colima set up, does the new Apple container tool provide any benefits beyond just being made by Apple?
Try OrbStack's Docker. It is fast, and it has a Kubernetes cluster feature.
It might not be Ubuntu but Asahi Linux runs Fedora pretty well on M2 Pro and older Apple Silicon Mac Minis: https://asahilinux.org/fedora/#device-support
https://ubuntuasahi.org/
Related posts:

- [How to Run GUI Applications Directly in Containers](https://github.com/hemashushu/docker-archlinux-gui)
- [GUI Application Development Environment in a Container](https://github.com/hemashushu/docker-archlinux-gui-devel)
I just carry around a pwnagotchi on a keychain, and use my iPad to access it to do Linux development work, including running a full Raspbian desktop, dev tools, etc.
I'm a dummy. Can you explain your setup? How does the Pi fit on a keychain?
I searched for the term, and it seems to be a DIY kit that uses reinforcement learning to try to crack WPA keys?
Does anybody have a good writeup/tutorial on doing similar things with Wayland? From my limited knowledge that might be with RDP instead, but there hasn't been anything more distilled as far as I know.
I've also done xpra in docker before; that's always felt as hacky as it sounds though.
I don't use it much, but I've glued together sway+wayvnc+novnc in a container and it worked fine (exposing both raw VNC and the webified novnc interface).
That sounds useful, do you have the Dockerfile for it pushed anywhere?
Here ya go:)
https://gitlab.com/yjftsjthsd-g/docker_sway-vnc
Sheesh. Just use LXC.
This is the first thing that came to my mind. Why pick an OCI container instead of an LXC container since it's a stateful workload?
Going OCI here only makes sense for temporary, disposable sessions.
I run full-headed Puppeteer sessions in Docker, with VNC for debugging and observation. I keep the instances as light as I can, but I suspect I'm most of the way there toward a "full" desktop experience. Probably just need to add a more full-featured window manager (currently I use fluxbox)
What's the best way to forward the X server over SSH?
ssh -YC user@$host
It has worked since... forever. Interestingly, it works with WSL2 on Windows, too!
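For a host you connect to often, the same flags can live in ~/.ssh/config instead (the host alias here is made up); `ForwardX11Trusted yes` corresponds to `-Y` and `Compression yes` to `-C`:

```
Host devbox
    ForwardX11 yes
    ForwardX11Trusted yes
    Compression yes
```

After that, a plain `ssh devbox` gives you the forwarded X display.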
I run Arch under WSL2 and then in ~/.bashrc:
WINDOWS_IP=$(ip route | awk '/^default/ {print $3}')
export DISPLAY="$WINDOWS_IP:0"
Now I can use the mighty MobaXterm from https://www.mobatek.net to just run whatever and pipe it back to Windows.
One caveat is that the $PATH gets polluted with space characters by 'Doze, so I have to do something like this for QGIS:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin qgis -n &
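The gateway lookup above can be wrapped in a small function (the name is illustrative) so the parsing is easy to sanity-check; note that DISPLAY has to be exported for GUI apps to see it:

```shell
# Illustrative helper: turn `ip route` output into a DISPLAY value
# pointing at the Windows host's X server.
display_from_routes() {
    # $1: the full output of `ip route`; prints "<gateway>:0"
    awk '/^default/ {print $3 ":0"; exit}' <<< "$1"
}

# Typical use in ~/.bashrc:
# export DISPLAY="$(display_from_routes "$(ip route)")"
```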
This sounds interesting, but I don't fully follow.
What are your use cases? To run Linux GUI apps?
Does MobaXterm allow you to view those GUI apps?