Epskampie 3 days ago

> Requirements on the executing device: Docker is required.

  • arjvik 3 days ago

    A good friend built dockerc[1], which doesn't have this limitation!

    [1]: https://github.com/NilsIrl/dockerc

    • hnuser123456 3 days ago

      That screenshot in the readme is hilarious. Nice project.

    • ecnahc515 3 days ago

      Instead it requires QEMU!

    • remram 3 days ago

      I can't tell what this does from the readme. Does it package a container runtime in the exe? Or a virtual machine? Something else?

    • vinceguidry 3 days ago

      Looks like macOS and Windows support is still being worked on.

    • ugh123 3 days ago

      lol guy makes a fair point. Open source software suffers from this expectation that anyone interested in the project must be technical enough to be able to clone, compile, and fix the inevitable issues just to get something running and usable.

      • Hamuko 3 days ago

        I'd say that a lot of people suffer from the expectation that just because I made a tool for myself and put it up on GitHub in case someone else would enjoy it too, I'm now obligated to provide support. Especially when the person in the screenshot is angry over the lack of a Windows binary.

      • dowager_dan99 3 days ago

        Thank goodness; solving this "problem" for the general internet destroyed it. Your point seems to be someone else should do that for every stupid asshole on the web?

    • dheera 3 days ago

      But will this run inside another docker container?

      I normally hate things shipped as containers because I often want to use it inside a docker container and docker-in-docker just seems like a messy waste of resources.

      • vinceguidry 3 days ago

        Docker in Docker is not a waste of resources: it just makes the container runtime the container is already running on available inside it. Really a better solution than a control plane like Kubernetes.
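
        In practice that usually just means mounting the host's Docker socket into the container, so the inner docker CLI talks to the host daemon rather than a nested one. A minimal sketch (image and command are just examples):

          # "docker beside docker": the inner CLI drives the host daemon
          docker run --rm \
            -v /var/run/docker.sock:/var/run/docker.sock \
            docker:cli docker ps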

      • remram 3 days ago

        Docker is not emulation so there's no waste of resources.

      • rcfox 3 days ago

        Doesn't podman get around a lot of those issues?

        • dheera 3 days ago

          Aw hell, more band-aids because people don't want to get software distribution done right.

          Can we please go back to the days of sudo dpkg -i foo.deb and then just /usr/bin/foo ?

          • johnisgood 2 days ago

            I am still using "ar x" and "tar xvf" for .deb files on Void Linux, because some projects only release .deb files!
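
            For anyone curious, a .deb is just an ar archive wrapping a control and a data tarball, so roughly (the compression suffix varies by package; it may be .gz or .zst instead of .xz):

              ar x foo.deb        # yields debian-binary, control.tar.*, data.tar.*
              mkdir rootfs
              tar xvf data.tar.xz -C rootfs   # the actual files, e.g. usr/bin/foo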

  • harha_ 3 days ago

    Yeah, it feels like nothing but a little trick. Why would anyone actually want to use this? The exe simply calls docker; it can embed an image into the exe, but even then it first calls docker to load the embedded image.

    • jve 3 days ago

      I see a use case. The other day I wished that I could pack CLI commands as docker containers and execute them as CLI commands and get return codes.

      I haven't tried this stuff, but maybe this is something in that direction.

      • lelanthran 3 days ago

        > I see a use case. The other day I wished that I could pack CLI commands as docker containers and execute them as CLI commands and get return codes

        I don't understand this requirement/specification; presumably this use-case can't be satisfied by a shell script, but I don't see why not.

        What are you wanting from this use-case that can't be done with a shell script?

        • lazide 3 days ago

          Presumably, they don’t want to write/maintain a shell script wrapper for every time they want to do this, when they could use a tool which does it for them.

          • lelanthran 3 days ago

            > Presumably, they don’t want to write/maintain a shell script wrapper for every time they want to do this, when they could use a tool which does it for them.

            How's "packing" cli commands into a shell script any different from "packing" CLI commands into a container?

            • lazide 3 days ago

              Calling a container on the CLI is a pain in the ass.

              People generally don't put stuff into containers if it already works on the CLI in whatever environment you're in. Stuff that doesn't, of course they do.

              Having a shell script wrapper that makes it not a pain in the ass, while letting all the environment management stuff still work correctly in the container, is convenient.

              Writing said wrapper each time, however, is a pain in the ass.

              Generating one makes it not such a pain in the ass to use.

              So then you get convenient CLI usage of something that needs a container to not be a pain in the ass to install/use.

        • james_marks 3 days ago

          An icon a non-technical user can click to run it.

          • cmeacham98 3 days ago

            A non-technical user that has docker installed?

      • matsemann 3 days ago

        I do that for a lot of stuff. Got a bit annoyed with internal tools that were so difficult to set up (needed this exact version of global Python, expected this and that to be in the path, constantly needed to be updated and then stuff broke again). So I built a docker image instead where everything is managed, and when I need to update or change stuff I can do it from a clean slate without affecting anything else on my computer.

        To use it, it's basically just scripts loaded into my shell. So if I do "toolname command args" it will spin up the container, mount the current folder and some config folders some tools expect, forward some ports, then pass the command and args to the container, which runs them.
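
        Each wrapper amounts to something like this (names, mounts and ports are specific to my setup, so treat it as a sketch):

          # hypothetical wrapper loaded into the shell: "toolname cmd args"
          # runs inside the container and passes the arguments through
          toolname() {
            docker run --rm -it \
              -v "$PWD":/work -w /work \
              -v "$HOME/.toolname":/root/.toolname \
              -p 8080:8080 \
              internal/toolname:latest "$@"
          }

        Since docker run exits with the container's exit code, this also covers the return-code use case mentioned upthread.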

        99% of the time it works smoothly. The annoying part is when some tool depends on another tool on the host machine, like wanting to do some git stuff; then I have to have git installed and my keys copied in as well.

        • endofreach 3 days ago

          > my keys copied in as well for instance.

          Tip: you could also forward your ssh agent. I remember it was a bit of a pain in the ass on macOS and a Windows WSL2 setup, but likely worth it for your setup.
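
          On a Linux host it's just two flags (image name hypothetical); on macOS you mount Docker Desktop's /run/host-services/ssh-auth.sock instead of $SSH_AUTH_SOCK:

            docker run --rm -it \
              -v "$SSH_AUTH_SOCK":/ssh-agent \
              -e SSH_AUTH_SOCK=/ssh-agent \
              toolbox:latest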

    • johncs 3 days ago

      Basically the same as Python’s zipapps which have some niche use cases.

      Before zipapp came out I built superzippy to do it. Needed to distribute some Python tooling to users in a university where everyone was running Linux on lab computers. Worked perfectly for it.

    • j45 3 days ago

      Could be ease of use for end users who don't docker.

      • worldsayshi 3 days ago

        But now you have two problems.

        • throwanem 3 days ago

          The first of which can be p90 solved by "Okay, type 'apt install dash capital why docker return,' tell me what happens...okay, and 'docker dash vee' says...great! Now..."

          Probably takes a couple minutes, maybe less if you've got a good fast distro mirror nearby. More if you're trying to explain it to a biologist - love those folks, they do great work, incredible parties, not always at home in the digital domain.

  • alumic 3 days ago

    I was so blown away by the title and equally disappointed to discover this line.

    Pack it in, guys. No magic today.

    • stingraycharles 3 days ago

      Thank god there’s still this project that can build single executables that work on multiple OS’es, I’m still amazed by that level of magic.

  • Hamuko 3 days ago

    I feel like it's much easier to send a docker run snippet than an executable binary to my Docker-using friends. I usually try to include an example `docker run` and/or Docker Compose snippet in my projects too.
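
    Something like this (image name and ports made up) is usually all a Docker-using friend needs:

      docker run --rm -p 8080:8080 -v "$PWD/data":/data ghcr.io/example/mytool:latest

    or the Compose equivalent:

      services:
        mytool:
          image: ghcr.io/example/mytool:latest
          ports:
            - "8080:8080"
          volumes:
            - ./data:/data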

  • drawfloat 3 days ago

    Is there any alternative way of achieving a similar goal (shipping a container to non technical customers that they can run as if it were an application)?

    • regularfry 3 days ago

      It feels like there ought to be a way to wrap a UML kernel build with a container image. Never seen it done, but I can't think of an obvious reason why it wouldn't work.

    • mrbluecoat 3 days ago

      See the dockerc comment above

dennydai 3 days ago

Just use shebang

https://news.ycombinator.com/item?id=38987109

#!/usr/bin/env -S bash -c "docker run -p 8080:8080 -it --rm \$(docker build --progress plain -f \$0 . 2>&1 | tee /dev/stderr | grep -oP 'sha256:[0-9a-f]*')"
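
If I read the trick right, that line goes at the top of a Dockerfile (Docker's parser treats the #! line as a comment), and then you run the file directly:

  chmod +x Dockerfile
  ./Dockerfile   # builds the image, then runs it with the flags baked into the shebang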

rullopat 3 days ago

It's great for sending your 6 GB hello world exe to your friends I suppose

  • xandrius 3 days ago

    The beauty of docker is that it is a reflection of how much someone cares about deployments: do you care about being efficient? You can use `scratch` or `X-alpine`. Do you simply not care and just want things to work? Always go for `ubuntu` and you're good to go!

    You can have a full and extensive API backend in golang with a total image size of 5-6 MB.
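
    The usual trick being a multi-stage build: compile a static binary in a golang image and copy only that into scratch. A rough sketch (module layout hypothetical):

      FROM golang:1.22 AS build
      WORKDIR /src
      COPY . .
      RUN CGO_ENABLED=0 go build -o /app ./cmd/server

      FROM scratch
      COPY --from=build /app /app
      ENTRYPOINT ["/app"]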

    • hereonout2 3 days ago

      I've done both, from tiny scratch-based images with a single Go binary to full-fat ubuntu-based things.

      What is killing me at the moment is deploying Docker based AI applications.

      The CUDA base images come in at several GB to start with, and then typically a whole host of Python dependencies gets added, with things like pytorch contributing almost a GB of binaries.

      Typically the application code is tiny as it's usually just Python, but then you have the ML model itself. These can be many GB too, so you need to decide whether to add it to the image or mount it as a volume; either way it needs to make its way onto the deployment target.
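
      For what it's worth, the volume route looks something like this (paths, image and flag hypothetical), which at least keeps the multi-GB weights out of the registry:

        docker run --rm --gpus all \
          -v /mnt/models/llama-13b:/models:ro \
          myorg/inference:latest --model-dir /models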

      I'm currently delivering double-digit GB docker images to different parts of my organisation, which raises eyebrows. I'm not sure there's a way around it though; it's less a Docker problem and more an AI / CUDA issue.

      Docker fits current workflows but I can't help feeling having custom VM images for this type of thing would be more efficient.

      • kevmo314 3 days ago

        PyTorch essentially landed on the same solution of bundling CUDA, so you're at least in good company.

        • hereonout2 3 days ago

          Yep, and then I have some projects with pytorch dependencies that use pytorch's own bundled CUDA, and non-pytorch dependencies that use a CUDA in the usual system-wide include path.

          So CUDA gets packaged up in the container twice unless I start building everything from source or messing about with RPATHs!

    • endofreach 3 days ago

      > You can have a full and extensive api backend in golang, having a total image size of 5-6MB.

      So people are building docker "binaries" that depend on docker being installed on the host, to run a container inside a container on the host; or even better, on a non-Linux host, where all of that then runs in a VM on the host... just... to run a golang application that is... already compiled to a binary?

      • xandrius 2 days ago

        Sure but a Docker setup is more than just running the binary. You have setup configs, env vars, external dependencies, and all executed in the same way.

        Of course you can do it directly on the machine but maybe you don't need containers then.

        In the same vein: people put stuff within a box, which is then put within another bigger box, inside a metal container, on top of another floating container. Why? Well, for some that's convenient.

    • anthk 3 days ago

      Golang should not need docker. It's statically built.

      • hereonout2 3 days ago

        Docker / containers are more than just that though. Using it allows your golang process to be isolated and integrated into the rest of your tooling, deployment pipelines, etc.

        • anthk 3 days ago

          It's go; that could be trivially done with a script.

          Heck, you can even cross-compile Go code from any architecture to another (even for different OSes), and docker would be useless there unless it has mechanisms to bind qemu-$ARCH with containers and binfmt.
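
          For reference, the cross-compile is just environment variables (and, for what it's worth, docker buildx can do the qemu/binfmt binding these days):

            GOOS=windows GOARCH=amd64 go build -o foo.exe .
            GOOS=darwin  GOARCH=arm64 go build -o foo-mac .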

          • jjice 3 days ago

            I'd argue that having it in a Docker container is much easier to integrate with the rest of many people's infra. On ECS, K8s, or similar? Docker is such an easy layer to slap on and it'll fit in easily in that situation.

            Are you running on bare servers? Sure, a Go binary and a script is fine.

            • hereonout2 3 days ago

              Yep, it's using docker as a means of delivery really. Especially in larger organisations this is just the done thing now.

              I understand what the OP is saying but not sure they get this context.

              If I were working in that world still I might have that single binary, and a script, but I'm old school and would probably make an RPM package and add a systemd unit file and some log rotate configs too!

cik 3 days ago

It sounds like docker export and makeself combined. We already ship prebuilt containers to select customers exactly this way.
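
For anyone wanting to reproduce that, a rough sketch (names hypothetical; docker save keeps image metadata, unlike docker export, which dumps a container's filesystem):

  mkdir -p pkg
  docker save myorg/app:1.2.3 | gzip > pkg/app.tar.gz
  printf '%s\n' '#!/bin/sh' \
    'docker load < app.tar.gz' \
    'exec docker run --rm myorg/app:1.2.3' > pkg/install.sh
  chmod +x pkg/install.sh
  makeself.sh pkg app-installer.run "My App" ./install.sh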

aussieguy1234 3 days ago

On Linux, there would be little to no performance penalty to something like this, since Docker is just a fancy chroot (namespaces and cgroups) reusing the same kernel as the host.

But not on other platforms: there the containers are the same, but they run Linux in a VM.

ransom1538 3 days ago

Ah finally. We have finished where we started.

  • blueflow 3 days ago

    That was my first thought. Back in the day you gave your friends a stand-alone *.COM program on a floppy. We have come full circle on static linking.

rietta 3 days ago

I remember thinking that the Visual Basic runtime was unacceptable bloat overhead, and now this. Cool work though. It also reminds me of self-extracting WinZip files.

  • sitkack 3 days ago

    At some point in the future we will be nostalgic for the monstrosities of the present.

    • int_19h 3 days ago

      "ChatGPT, execute this natural language description of what the program should do"

  • int_19h 3 days ago

    I remember those times as well. There was an amusing (in retrospect) period in the late 90s - early 00s where one of the metrics for RAD tools of the day was how large a hello world type app was. Delphi was so popular then in part because it did very well on that metric - the baseline was on the order of 300 KB, if I remember correctly, and you could have fairly complicated apps under 1 MB. Visual Basic was decidedly meh on that count because between your EXE and MSVBVM60.DLL, it wouldn't fit on a single 1.44 MB floppy.

kkapelon 3 days ago

This is just a simple wrapper over the docker executable, which you need to have installed anyway.

nine_k 3 days ago

Tired: docker run.

Wired: docker2exe.

Inspired: AppImage.

(I'll show myself out.)

hda111 3 days ago

Why? Would be easier to embed both podman and the image in one executable to create a self-contained file. No docker needed.

arjav0703 3 days ago

This is useful if you want to share your container (probably something that is prod-ready) with someone who knows nothing about docker. A use case would be: you built custom software for someone's business and they are the only one using that particular container.

fifilura 3 days ago

Docker is mostly backend, but I wonder how far we are from universally executable native applications?

I.e. download this linux/mac/windows application to your windows/linux/mac computer.

Double-click to run.

Seems like all bits and pieces are already there, just need to put them together.

  • Piskvorrr 3 days ago

    The devil is in the details.

    What do you mean, "requires Windows 11"? What is even "glibc" and why do I need a different version on this Linux machine? How do I tell that the M4 needs an "arm64", why not a leg64 and how is this not amd64?

    In other words, it's very simple in theory - but the actual landscape is far, FAR more fragmented than a mere "that's a windows/linux/mac box, here's a windows/linux/mac executable, DONE"

    (And that's for an application without a GUI.)

    • fifilura 3 days ago

      Yes, it is difficult, but difficult problems have been solved before.

      With dependency management systems, docker, package managers.

      macOS and Windows are closed source and that is of course a problem. I guess the first demo would be a universally runnable Linux executable on Windows.

      • Piskvorrr a day ago

        I have been trying. As I may not have been entirely clear the first time:

        It's not that hard to wrap your python/java/whatever app in a polyglot executable that will run on your Linux box, on your Mac, and on your Windows box. Here's a much harder target: "I would like to take this to any of such boxes, of reasonably vanilla config, and get it to run there, or at least crawl. 'Start and catch fire' doesn't count, 'exit randomly' doesn't count." The least problematic way to do this is "assume Java", and even that is wildly unsuccessful (versions and configs and JVMs, oh my!). The second least problematic is "webpage" (unless you are trying to interact with any hardware).

        The differences in boxes within an OS are often as large as differences across OSes. Docker was supposed to help with this by "we'll ship your box then," and while the idea works great, the assumption "there's already a working Docker, and/or you can just drop a working Docker" is...not great: you just push everything up a level of abstraction, yet end up with the original problem unsolved and unchanged. (There's an actual solution "ship the whole box, hardware and everything," but the downsides are obvious)

  • lucasoshiro 3 days ago

    > universally executable native applications

    To achieve that you'll need some kind of compatibility layer. Perhaps something like wine? Or WSL? Or a VM?

    Then you'll have what we already have with the JVM and similar.

  • woodrowbarlow 3 days ago

    https://justine.lol/ape.html -- αcτµαlly pδrταblε εxεcµταblε

    this works for actual compiled code. no vm, no runtime, no interpreter, no container. native compiled machine code. just download and double-click, no matter which OS you use.

    • Piskvorrr 3 days ago

      "Please note this is intended for people who don't care about desktop GUIs, and just want stdio and sockets without devops toil."

      • woodrowbarlow 2 days ago

        cosmopolitan-libc has aspirations (but not concrete plans) to add SDL interfaces for all supported platforms. this would allow APE executables to compile in cross-platform UI toolkits like Qt.

        • Piskvorrr a day ago

          Aspirations are nice and all. Java's had them for three decades now.

  • ivewonyoung 3 days ago

    How different would that be from Flatpak?

    • fifilura 3 days ago

      Does it make linux applications run on Windows or mac?

PicassoCTs 3 days ago

So, does this work with a Docker swarm? As in, a whole services swarm gets converted down into a monolith?

sunrunner 3 days ago

I'm just as disappointed as I was when I first heard about being able to create 'Self-contained Executable Programs with Deno Compile'; perhaps slightly more, even, as at least that bundled the interpreter.

In all seriousness, making Docker a requirement for the end-users of your executable seems like a 'shift-right' approach to deployment effort: instead of doing the work to make a usable standalone executable, a bunch of requirements are just pushed onto users. In some cases your users might be technical, but even then Docker only seems to make sense when it's kept inside an environment where the assumption of a container runtime is there.

I assume extra steps are needed to allow the 'executable' to access filesystem resources, making it sandboxed but not in a way that's helpful for end users?

  • 7bit 3 days ago

    Docker as a requirement for end-users is terrible no matter what.

  • wojtek1942 3 days ago

    Why the disappointment with Deno compile? I have not used it but from the website it seems that the end user does not need Deno to be installed. What is the shortcoming you are referring to?

    • sunrunner 3 days ago

      It's not a fair comparison on my part but before reading through the docs some of the initial wording around Deno compile seemed to imply (or I inferred) that a platform native executable would be produced from the process. Wishful thinking on my part I guess.

      Other languages like Golang make it relatively easy to build _native_ programs and to cross-compile them, which makes them a solid choice for CLI tools, and I was genuinely hoping that more tooling like that was coming to other ecosystems. Perhaps naive to expect a shift like that for a language that's always been interpreted, but I like when I can run developer tools as native programs instead of ending up with various versions of a runtime installed (npx doesn't _solve_ this problem, merely works around it).

revskill 3 days ago

So basically I could bundle the Linux OS as an exe and run it on Windows.

  • RachelF 3 days ago

    you need "Docker for Windows", which runs a Linux VM which then runs Docker.

    • revskill 3 days ago

      It is a joke.

      • sitkack 3 days ago

        When it happens, will it be more or less funny?

Alex_001 3 days ago

This is super cool — especially for sharing tools with non-technical users or bundling CLIs without asking people to install Docker. Packaging infra-heavy apps into a simple .exe could really smooth out distribution. Curious how it handles startup time and embedded filesystem size.

  • shric 3 days ago

    > or bundling CLIs without asking people to install Docker.

    Except it requires people to install Docker.