Me, I don't have a team. Well... here's how it works: every time I publish an article, a whole crowd of people makes up for the absence of an editor by reporting typos, inaccuracies, and so on. But on the code side, it's just me. And I have a couple of computers up to the task of regularly compiling a metric ton of Rust, as I tend to do. None of them run Linux on the desktop, and although I ship things specifically for Linux (both at my day job and in my side gig), that's nothing a VM can't solve, and I have plenty of those.
Also, I like being able to write from different places, so that would mean having two computers, and using one (the laptop) to connect remotely to the other (the desktop). That makes the energy + heat problem worse, and now we run into the question of "how do you keep the desktop awake while you're SSH'd into a Linux VM running on it, but have it go to sleep promptly when you're not?".
$ fly vm status fdca430e
Instance
  ID            = fdca430e
  Process       =
  Version       = 0
  Region        = cdg
  Desired       = run
  Status        = running
  Health Checks = 1 total, 1 passing
  Restarts      = 0
  Created       = 7m10s ago
Recent Events
TIMESTAMP            TYPE       MESSAGE
2022-06-18T16:05:52Z Received   Task received by client
2022-06-18T16:05:52Z Task Setup Building Task Directory
2022-06-18T16:06:00Z Started    Task started by client
Checks
ID                               SERVICE  STATE   OUTPUT
3df2415693844068640885b45074b954 tcp-8080 passing TCP connect 172.19.2.2:8080: Success
Recent Logs
So yeah, that's classic Fly!
With fly regions set we can decide where our app should run, with fly scale count we can change how many instances are running, and with fly scale vm we can switch VM types (all really easy nowadays). For example, here's the one I use for videos:
$ fly status
App
  Name     = tube
  Owner    = personal
  Version  = 164
  Status   = running
  Hostname = tube.fly.dev
Instances
ID       PROCESS VERSION REGION DESIRED STATUS  HEALTH CHECKS RESTARTS CREATED
c1f4d89e app     164     sjc    run     running               0        2022-06-14T22:02:22Z
b74afb02 app     164     yyz    run     running               0        2022-05-09T21:07:53Z
8b5ca0c7 app     164     gru    run     running               0        2022-05-09T21:07:15Z
0b08b59c app     164     ams    run     running               0        2022-05-09T21:06:30Z
6389589a app     164     cdg    run     running               0        2022-05-09T21:05:42Z
ea94e5ef app     164     nrt    run     running               0        2022-05-09T21:03:21Z
79ecda2b app     164     iad    run     running               1        2022-05-09T21:02:51Z
26ea7a65 app     164     yyz    run     running               0        2022-05-09T21:02:10Z
$ fly scale show
VM Resources for tube
        VM Size: shared-cpu-1x
      VM Memory: 512 MB
          Count: 8
 Max Per Region: Not set
$ fly volumes list
ID                   STATE   NAME      SIZE REGION ZONE ATTACHED VM CREATED AT
vol_18l524y8j0er7zmp created tubecache 40GB ams    8aba 0b08b59c    1 month ago
vol_18l524y8j5jr7zmp created tubecache 40GB yyz    d33c 26ea7a65    1 month ago
vol_okgj54580lq4y2wz created tubecache 40GB iad    ddf7             1 month ago
vol_x915grnzw8krn70q created tubecache 40GB nrt    0e0f ea94e5ef    1 month ago
vol_ke628r68g3n4wmnp created tubecache 40GB sjc    c0a5 c1f4d89e    1 month ago
vol_02gk9vwnej1v76wm created tubecache 40GB cdg    0e8c 6389589a    1 month ago
vol_8zmjnv8em85vywgx created tubecache 40GB yyz    5e29 b74afb02    1 month ago
vol_ypkl7vz8k5evqg60 created tubecache 40GB iad    f6cb 79ecda2b    1 month ago
vol_0nylzre12814qmkp created tubecache 40GB gru    2824 8b5ca0c7    1 month ago
vol_52en7r1jpl9rk6yx created tubecache 40GB syd    039e             1 month ago
vol_w1q85vgn7jj4zdxe created tubecache 40GB lhr    ad0e             1 month ago
* OpenSSL SSL_connect: Connection reset by peer in connection to hello-axum.fly.dev:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection to hello-axum.fly.dev:443
$ time bash -c 'fly scale count 1; while true; do curl https://hello-axum.fly.dev --max-time 1 && exit 0 || echo "still starting..."; done'
Count changed to 1
curl: (28) Operation timed out after 1000 milliseconds with 0 out of 0 bytes received
still starting...
curl: (28) Operation timed out after 1001 milliseconds with 0 out of 0 bytes received
still starting...
curl: (28) Operation timed out after 1000 milliseconds with 0 out of 0 bytes received
still starting...
curl: (28) Operation timed out after 1001 milliseconds with 0 out of 0 bytes received
still starting...
curl: (28) Operation timed out after 1001 milliseconds with 0 out of 0 bytes received
still starting...
curl: (28) Operation timed out after 1000 milliseconds with 0 out of 0 bytes received
still starting...
curl: (28) Operation timed out after 1000 milliseconds with 0 out of 0 bytes received
still starting...
curl: (28) Operation timed out after 1001 milliseconds with 0 out of 0 bytes received
still starting...
hello from axum
bash -c   0.14s user 0.07s system 2% cpu 8.421 total
$ fly ssh issue
? Select organization: Amos Wenger (personal)
? Email address for user to issue cert: [redacted]
!!!! WARNING: We're now prompting you to save an SSH private key and certificate       !!!!
!!!! (the private key in "id_whatever" and the certificate in "id_whatever-cert.pub"). !!!!
!!!! These SSH credentials are time-limited and handling them in files is clunky;      !!!!
!!!! consider running an SSH agent and running this command with --agent. Things       !!!!
!!!! should just sort of work like magic if you do.                                    !!!!
? Path to store private key: /tmp/id_rsa
Wrote 24-hour SSH credential to /tmp/id_rsa, /tmp/id_rsa-cert.pub
Host hello-axum
  HostName localhost
  Port 2200
  IdentityFile /tmp/id_rsa
[19:27:01.785] Remote server is listening on 43703
[19:27:01.785] Parsed server configuration: {"serverConfiguration":{"remoteListeningOn":{"port":43703},"osReleaseId":"debian","arch":"x86_64","webUiAccessToken":"","sshAuthSock":"","display":"","tmpDir":"/tmp","platform":"linux","connectionToken":"1a11a111-1111-111a-aaa1-a11a11111111"},"downloadTime":3407,"installTime":1447,"serverStartTime":99,"installUnpackCode":"success"}
[19:27:01.786] Persisting server connection details to /Users/amos/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-ssh/vscode-ssh-host-9a297f3d-30d9c6cd9483b2cc586687151bcbcd635f373630-0.82.1/data.json
[19:27:01.788] Starting forwarding server. localPort 54022 -> socksPort 54016 -> remotePort 43703
[19:27:01.788] Forwarding server listening on 54022
[19:27:01.788] Waiting for ssh tunnel to be ready
[19:27:01.789] Tunneled 43703 to local port 54022
[19:27:01.789] Resolved "ssh-remote+hello-axum" to "127.0.0.1:54022"
[19:27:01.790] [Forwarding server 54022] Got connection 0
[19:27:01.796] ------
[19:27:01.807] [Forwarding server 54022] Got connection 1
[19:27:01.809] Failed to set up socket for dynamic port forward to remote port 43703: connect ECONNREFUSED 127.0.0.1:54016. Is the remote port correct?
[19:27:01.809] > local-server-1> ssh child died, shutting down
[19:27:01.809] Failed to set up socket for dynamic port forward to remote port 43703: Socket closed. Is the remote port correct?
[19:27:01.812] Local server exit: 0
$ fly machines run --port 80:8080/tcp:http --port 443:8080/tcp:http:tls --region cdg --size shared-cpu-1x hello-axum
Searching for image 'hello-axum' locally...
image found: sha256:3f93ceb9158f5e123253060d58d607f7c2a7e2f93797b49b4edbbbcc8e1b3840
==> Pushing image to fly
The push refers to repository [registry.fly.io/axum-machine]
02f75279051e: Layer already exists
4e38e245312b: Layer already exists
85ade8c6ca76: Layer already exists
ad6562704f37: Layer already exists
deployment-1655573668: digest: sha256:1ddfda6a6d8d84d804602653501db1c9720677b6e04e31008d3256c53ec09723 size: 1159
--> Pushing image done
Image: registry.fly.io/axum-machine:deployment-1655573668
Image size: 152 MB
Machine is launching...
Success! A machine has been successfully launched, waiting for it to be started
 Machine ID: 217814d9c9ee89
 Instance ID: 01G5VY2TKH0A1MQWSX05S1GPK8
 State: starting
Waiting on firecracker VM...
Waiting on firecracker VM...
Waiting on firecracker VM...
Machine started, you can connect via the following private ip
 fdaa:0:446c:a7b:5b66:d530:1a4b:2
$ fly status
App
  Name     = axum-machine
  Owner    = personal
  Version  = 0
  Status   = pending
  Hostname = axum-machine.fly.dev
Machines
ID             NAME                   REGION STATE   CREATED
217814d9c9ee89 ancient-snowflake-1933 cdg    started 2022-06-18T17:34:30Z
This works too, and shows our machine is running. Neat!
We also have:
$ fly m list
1 machines have been retrieved.
View them in the UI here (https://fly.io/apps/axum-machine/machines/)
axum-machine
ID             IMAGE                              CREATED              STATE   REGION NAME                   IP ADDRESS
217814d9c9ee89 axum-machine:deployment-1655573668 2022-06-18T17:34:30Z started cdg    ancient-snowflake-1933 fdaa:0:446c:a7b:5b66:d530:1a4b:2
..which has a bit more detail.
Our app doesn't currently have a public IP address, so there's no reaching it with curl through its domain.
But we can allocate one. I'll go with IPv6, since I have it, and IPv4 addresses are a precious commodity.
$ fly ips allocate-v6
TYPE ADDRESS           REGION CREATED AT
v6   2a09:8280:1::48d5 global 1s ago
$ fly m stop 217814d9c9ee89
217814d9c9ee89 has been successfully stopped
$ fly m status 217814d9c9ee89
Success! A machine has been retrieved
 Machine ID: 217814d9c9ee89
 Instance ID: 01G5VY2TKH0A1MQWSX05S1GPK8
 State: stopped
Event Logs
MACHINE STATUS EVENT TYPE SOURCE TIMESTAMP
stopped        exit       flyd   2022-06-18T17:40:38.517Z
stopping       stop       user   2022-06-18T17:40:35.245Z
started        start      flyd   2022-06-18T17:34:41.353Z
created        launch     user   2022-06-18T17:34:30.538Z
$ fly m status 217814d9c9ee89
Success! A machine has been retrieved
 Machine ID: 217814d9c9ee89
 Instance ID: 01G5VY2TKH0A1MQWSX05S1GPK8
 State: started
Event Logs
MACHINE STATUS EVENT TYPE SOURCE TIMESTAMP
started        start      flyd   2022-06-18T17:41:46.075Z
starting       start      user   2022-06-18T17:41:45.695Z
stopped        exit       flyd   2022-06-18T17:40:38.517Z
stopping       stop       user   2022-06-18T17:40:35.245Z
started        start      flyd   2022-06-18T17:34:41.353Z
created        launch     user   2022-06-18T17:34:30.538Z
$ fly m remove --force 217814d9c9ee89
machine 217814d9c9ee89 was found and is currently in started state, attempting to destroy...
217814d9c9ee89 has been destroyed
$ fly machines run --app axum-machine --port 22:22/tcp --region cdg --size shared-cpu-1x hello-axum
Searching for image 'hello-axum' locally...
image found: sha256:3f93ceb9158f5e123253060d58d607f7c2a7e2f93797b49b4edbbbcc8e1b3840
==> Pushing image to fly
The push refers to repository [registry.fly.io/axum-machine]
02f75279051e: Layer already exists
4e38e245312b: Layer already exists
85ade8c6ca76: Layer already exists
ad6562704f37: Layer already exists
deployment-1655574325: digest: sha256:1ddfda6a6d8d84d804602653501db1c9720677b6e04e31008d3256c53ec09723 size: 1159
--> Pushing image done
Image: registry.fly.io/axum-machine:deployment-1655574325
Image size: 152 MB
Machine is launching...
Success! A machine has been successfully launched, waiting for it to be started
 Machine ID: 5918536ef46383
 Instance ID: 01G5VYPX14END6ZPAHBB411304
 State: starting
Waiting on firecracker VM...
Waiting on firecracker VM...
Machine started, you can connect via the following private ip
 fdaa:0:446c:a7b:5adc:24:e81f:2
And then:
$ ssh -vvv -i /tmp/id_rsa [email protected]
OpenSSH_8.2p1 Ubuntu-4ubuntu0.5, OpenSSL 1.1.1f  31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolving "axum-machine.fly.dev" port 22
debug2: ssh_connect_direct
debug1: Connecting to axum-machine.fly.dev [2a09:8280:1::48d5] port 22.
debug1: Connection established.
debug1: identity file /tmp/id_rsa type -1
debug1: identity file /tmp/id_rsa-cert type 7
debug1: Local version string SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.5
Mh. It's stuck.
Let's check the app logs…
$
2022-06-18T17:47:43Z proxy[5918536ef46383] cdg [info]Machine not ready yet (11.072820024s since start requested)
2022-06-18T17:47:44Z proxy[5918536ef46383] cdg [info]Machine not ready yet (15.250221892s since start requested)
2022-06-18T17:47:45Z proxy[5918536ef46383] cdg [info]Machine not ready yet (33.956303928s since start requested)
2022-06-18T17:47:47Z proxy[5918536ef46383] cdg [info]Machine not ready yet (5.409191838s since start requested)
2022-06-18T17:47:48Z proxy[5918536ef46383] cdg [info]Machine not ready yet (10.043353267s since start requested)
2022-06-18T17:47:48Z proxy[5918536ef46383] cdg [info]Machine not ready yet (16.080325672s since start requested)
2022-06-18T17:47:50Z proxy[5918536ef46383] cdg [info]Machine not ready yet (38.962990983s since start requested)
^C
let listener = TcpListener::bind("[::]:2222").await.unwrap();
while let Ok((mut ingress, _)) = listener.accept().await {
    let num_conns = num_conns.clone();
    tokio::spawn(async move {
        // We'll tell OpenSSH to listen on this IPv4 address.
        let mut egress = TcpStream::connect("127.0.0.2:22").await.unwrap();
        // did you know: loopback is 127.0.0.1/8, it goes all the way to
        // 127.255.255.254 (and 127.255.255.255 for broadcast)
        num_conns.fetch_add(1, Ordering::SeqCst);
        match tokio::io::copy_bidirectional(&mut ingress, &mut egress).await {
            Ok((to_egress, to_ingress)) => {
                println!(
                    "Connection ended gracefully ({to_egress} bytes from client, {to_ingress} bytes from server)"
                );
            }
            Err(err) => {
                println!("Error while proxying: {}", err);
            }
        }
        num_conns.fetch_sub(1, Ordering::SeqCst);
    });
}
}
# in `hello-axum/Dockerfile`
# syntax = docker/dockerfile:1.4
################################################################################
# Let's just make our own Rust builder image based on ubuntu:20.04 to avoid
# any libc version problems
FROM ubuntu:20.04 AS builder
# Install base utils: curl to grab rustup, gcc + build-essential for linking.
# we could probably reduce that a bit but /shrug
RUN set -eux; \
    export DEBIAN_FRONTEND=noninteractive; \
    apt update; \
    apt install --yes --no-install-recommends \
        curl ca-certificates \
        gcc build-essential \
        ; \
    apt clean autoclean; \
    apt autoremove --yes; \
    rm -rf /var/lib/{apt,dpkg,cache,log}/; \
    echo "Installed base utils!"
# Add rustup to path, check that it works
ENV PATH=${PATH}:/root/.cargo/bin
RUN set -eux; \
    rustup --version;
# Build some code!
# Careful: now we need to cache `/root/.cargo/` rather than `/usr/local/cargo`
# since rustup installed things differently than in the rust build image
WORKDIR /app
COPY . .
RUN --mount=type=cache,target=/app/target \
    --mount=type=cache,target=/root/.cargo/registry \
    --mount=type=cache,target=/root/.cargo/git \
    --mount=type=cache,target=/root/.rustup \
    set -eux; \
    rustup install stable; \
    cargo build --release; \
    objcopy --compress-debug-sections target/release/hello-axum ./hello-axum
################################################################################
FROM ubuntu:20.04
# Note that we've changed the `ListenAddress` here from `0.0.0.0` to
# `127.0.0.2`. It's not really necessary but it's neat that 127.0.0.1 is a /8.
RUN set -eux; \
    echo "Port 22" >> /etc/ssh/sshd_config; \
    echo "AddressFamily inet" >> /etc/ssh/sshd_config; \
    echo "ListenAddress 127.0.0.2" >> /etc/ssh/sshd_config; \
    echo "PasswordAuthentication no" >> /etc/ssh/sshd_config; \
    echo "ClientAliveInterval 30" >> /etc/ssh/sshd_config; \
    echo "ClientAliveCountMax 10" >> /etc/ssh/sshd_config; \
    echo "SSH server set up"
USER amos
# Don't forget to change that if you don't want to give /me/ access to your
# remote dev env! Otherwise I'll ssh in there and fix your code 😈
RUN set -eux; \
    mkdir ~/.ssh; \
    curl https://github.com/fasterthanlime.keys | tee -a ~/.ssh/authorized_keys
WORKDIR app
COPY --from=builder /app/hello-axum ./hello-axum
# Because our top-level process starts the ssh daemon itself, for simplicity,
# let's run it as root. It could drop privileges after that but we already have
# passwordless sudo set up on the machine so double-shrug.
USER root

CMD ["./hello-axum"]
use std::{
    process::Stdio,
    rc::Rc,
    sync::atomic::{AtomicU64, Ordering},
    time::{Duration, Instant},
};
// we can still use regular tokio stuff!
use tokio::{process::Command, time::sleep};
// but we want the uring versions of TCP sockets.
use tokio_uring::{
    buf::IoBuf,
    net::{TcpListener, TcpStream},
};
// can't use a regular main function because we need to start a
// `tokio-uring` runtime, which manages both the main tokio runtime
// and the uring runtime.
fn main() {
    // nobody's stopping us from defining our own main function though.
    tokio_uring::start(main_inner());
}
async fn main_inner() {
    // this is regular tokio stuff, still works fine.
    let status = Command::new("service")
        .arg("ssh")
        .arg("start")
        .stdin(Stdio::null())
        .stdout(Stdio::inherit())
        .stderr(Stdio::inherit())
        .status()
        .await
        .unwrap();
    assert!(status.success());
    let num_conns: Rc<AtomicU64> = Default::default();
    // We can still spawn stuff, but with tokio_uring's `spawn`. The future
    // we send doesn't have to be `Send`, since it's all single-threaded.
    tokio_uring::spawn({
        let num_conns = num_conns.clone();
        let mut last_activity = Instant::now();
    // tokio-uring's TcpListener wants a `SocketAddr`, not a `ToAddrs` or
    // something, so let's parse it ahead of time.
    let addr = "[::]:2222".parse().unwrap();
    // also it doesn't return a future?
    let listener = TcpListener::bind(addr).unwrap();
    while let Ok((ingress, _)) = listener.accept().await {
        println!("Accepted connection");
        let num_conns = num_conns.clone();
        tokio_uring::spawn(async move {
            // same deal, we need to parse first. if you're puzzled why there's
            // no mention of `SocketAddr` anywhere, it's inferred from what
            // `TcpStream::connect` wants.
            let egress_addr = "127.0.0.2:22".parse().unwrap();
            let egress = TcpStream::connect(egress_addr).await.unwrap();
            num_conns.fetch_add(1, Ordering::SeqCst);
            // `read` and `write` take owned buffers (more on that later), and
            // there's no "per-socket" buffer, so they actually take `&self`.
            // which means we don't need to split them into a read half and a
            // write half like we'd normally do with "regular tokio". Instead,
            // we can send a reference-counted version of it. also, since a
            // tokio-uring runtime is single-threaded, we can use `Rc` instead
            // of `Arc`.
            let egress = Rc::new(egress);
            let ingress = Rc::new(ingress);
            // We need to copy in both directions...
            let mut from_ingress = tokio_uring::spawn(copy(ingress.clone(), egress.clone()));
            let mut from_egress = tokio_uring::spawn(copy(egress.clone(), ingress.clone()));
            // Stop as soon as one of them errors
            let res = tokio::try_join!(&mut from_ingress, &mut from_egress);
            if let Err(e) = res {
                println!("Connection error: {}", e);
            }
            // Make sure the reference count drops to zero and the socket is
            // freed by aborting both tasks (which both hold a `Rc<TcpStream>`
            // for each direction)
            from_ingress.abort();
            from_egress.abort();
            num_conns.fetch_sub(1, Ordering::SeqCst);
        });
    }
}
async fn copy(from: Rc<TcpStream>, to: Rc<TcpStream>) -> Result<(), std::io::Error> {
    let mut buf = vec![0u8; 1024];
    loop {
        // things look weird: we pass ownership of the buffer to `read`, and we
        // get it back, _even if there was an error_. There's a whole trait for
        // that, which `Vec<u8>` implements!
        let (res, buf_read) = from.read(buf).await;
        // Propagate errors, see how many bytes we read
        let n = res?;
        if n == 0 {
            // A read of size zero signals EOF (end of file), finish gracefully
            return Ok(());
        }
        // The `slice` method here is implemented in an extension trait: it
        // returns an owned slice of our `Vec<u8>`, which we later turn back
        // into the full `Vec<u8>`
        let (res, buf_write) = to.write(buf_read.slice(..n)).await;
        res?;
        // Later is now, we want our full buffer back.
        // That's why we declared our binding `mut` way back at the start of
        // `copy`, even though we moved it into the very first `TcpStream::read`
        // call.
        buf = buf_write.into_inner();
    }
}
docker build, fly m remove --force, fly m run…. It works!
// We won't have an allocator, so we can't bring the Rust standard library
// with us here. Besides, it probably wouldn't pass the BPF verifier.
#![no_std]
#![no_main]

use aya_bpf::{macros::sock_ops, programs::SockOpsContext};
// This works a little like `tracing`!
use aya_log_ebpf::info;
// The proc macro here does the heavy lifting. There's a bunch of linker fuckery
// at hand here that would be fascinating, but that I won't get into.
#[sock_ops(name = "flyremote")]
pub fn flyremote(ctx: SockOpsContext) -> u32 {
    match unsafe { try_flyremote(ctx) } {
        Ok(ret) => ret,
        Err(ret) => ret,
    }
}
// This gets called for every "socket operation" event.
unsafe fn try_flyremote(ctx: SockOpsContext) -> Result<u32, u32> {
    // transmuting from a `u32` to a `[u8; 4]` - should be okay.
    let local_ip4: [u8; 4] = core::mem::transmute([ctx.local_ip4()]);
    let remote_ip4: [u8; 4] = core::mem::transmute([ctx.remote_ip4()]);
    // log some stuff
    info!(
        &ctx,
        "op ({} {}), local port {}, remote port {}, local ip4 = {}.{}.{}.{} remote ip4 = {}.{}.{}.{}",
        op_name(ctx.op()),
        ctx.op(),
        ctx.local_port(),
        // this value is big-endian (but local_port is native-endian)
        u32::from_be(ctx.remote_port()),
        local_ip4[0], local_ip4[1], local_ip4[2], local_ip4[3],
        remote_ip4[0], remote_ip4[1], remote_ip4[2], remote_ip4[3],
    );
    // that's `BPF_SOCK_OPS_STATE_CB_FLAG` - so we receive "state_cb" events,
    // when a socket changes state.
    // this may fail, so it returns a `Result`, but I wouldn't know what to do
    // if it failed anyway.
    let _ = ctx.set_cb_flags(1 << 2);
    // if this is a "state_cb" event, show the old state and new state, which
    // are the first two arguments (we have up to 4 arguments)
    if ctx.op() == 10 {
        info!(
            &ctx,
            "state transition: {} {} => {} {}",
            ctx.arg(0),
            state_name(ctx.arg(0)),
            ctx.arg(1),
            state_name(ctx.arg(1)),
        );
    }
target/bpfel-unknown-none/release/flyremote-bpf: file format elf64-bpf
SYMBOL TABLE:
0000000000000000 l    df *ABS*  0000000000000000 flyremote_bpf-8df4772bd494bad9
0000000000001890 l       sockops/flyremote  0000000000000000 LBB0_30
(cut)
0000000000000000 g     F sockops/flyremote  0000000000002a48 flyremote
0000000000000000 g     O maps   000000000000001c AYA_LOG_BUF
0000000000000040 g     F .text  0000000000000058 .hidden memcpy
000000000000001c g     O maps   000000000000001c AYA_LOGS
0000000000000000 g     F .text  0000000000000040 .hidden memset
To load it into the kernel, we need a regular Linux executable. For us, it'll be hello-axum (really regretting that name now, it's not axum-powered anymore).
We'll need these dependencies:
# in `hello-axum/Cargo.toml`
[package]
name = "hello-axum"
version = "0.1.0"
edition = "2021"
[dependencies]
aya = { version = ">=0.11", features = ["async_tokio"] }
aya-log = "0.1"
clap = { version = "3.1", features = ["derive"] }
color-eyre = "0.6.1"
log = "0.4"
simplelog = "0.12"
tokio = { version = "1.19.2", features = ["full"] }
use aya::programs::SockOps;
use aya::{include_bytes_aligned, Bpf};
use aya_log::BpfLogger;
use clap::Parser;
use log::info;
use simplelog::{ColorChoice, ConfigBuilder, LevelFilter, TermLogger, TerminalMode};
use tokio::signal;
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (tcp_connect_cb 3), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (rwnd_init 2), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (timeout_init 1), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (needs_ecn 6), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (rwnd_init 2), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (timeout_init 1), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (needs_ecn 6), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:52 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 2 syn-sent => 1 established
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (active_established_cb 4), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (passive_established_cb 5), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:52 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:52 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 3 syn-recv => 1 established
09:38:56 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:56 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 1 established => 4 fin-wait1
09:38:56 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:56 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 1 established => 8 close-wait
09:38:56 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:56 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 4 fin-wait1 => 5 fin-wait2
09:38:56 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 59920, remote port 22, local ip4 = 127.0.0.1 remote ip4 = 127.0.0.2
09:38:56 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 5 fin-wait2 => 7 close
09:38:56 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:56 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 8 close-wait => 9 last-ack
09:38:56 [INFO] flyremote_bpf: [src/main.rs:26] op (state_cb 10), local port 22, remote port 59920, local ip4 = 127.0.0.2 remote ip4 = 127.0.0.1
09:38:56 [INFO] flyremote_bpf: [src/main.rs:52] state transition: 9 last-ack => 7 close
impl BpfLogger {
    /// Starts reading log records created with `aya-log-ebpf` and logs them
    /// with the default logger. See [log::logger].
    pub fn init(bpf: &mut Bpf) -> Result<BpfLogger, Error> {
        BpfLogger::init_with_logger(bpf, DefaultLogger {})
    }

    /// Starts reading log records created with `aya-log-ebpf` and logs them
    /// with the given logger.
    pub fn init_with_logger<T: Log + 'static>(
        bpf: &mut Bpf,
        logger: T,
    ) -> Result<BpfLogger, Error> {
        let logger = Arc::new(logger);
        let mut logs: AsyncPerfEventArray<_> = bpf.map_mut("AYA_LOGS")?.try_into()?;
#![no_std]
#![no_main]

use aya_bpf::{
    macros::{map, sock_ops},
    maps::PerfEventArray,
    programs::SockOpsContext,
};
use aya_log_ebpf::info;
// This is what we'll send over our "perf event array"
#[repr(C)]
pub struct ConnectionEvent {
    // 1 = connected, 2 = disconnected
    pub action: u32,
}
// We could probably make a Rust enum work here, but I don't feel like fighting
// the verifier too much today.
const ACTION_CONNECTED: u32 = 1;
const ACTION_DISCONNECTED: u32 = 2;
// Just like aya-log does, but this only has events we care about
#[map(name = "EVENTS")]
static mut EVENTS: PerfEventArray<ConnectionEvent> =
    PerfEventArray::<ConnectionEvent>::with_max_entries(1024, 0);
unsafe fn try_flyremote(ctx: SockOpsContext) -> Result<u32, u32> {
    if ctx.local_port() != 22 {
        // don't care if it's not SSH-server-relevant
        return Ok(0);
    }
use aya::{include_bytes_aligned, util::online_cpus, Bpf};
use aya::{maps::perf::AsyncPerfEventArray, programs::SockOps};
use aya_log::BpfLogger;
use bytes::BytesMut;
use tokio::{signal, time::sleep};
// This is what we'll receive over our "perf event array". We'd normally
// have a "common" crate we pull from both the bpf-nostd world and the
// userspace-yesstd world, but for this example we're just copying it wholesale.
#[repr(C)]
#[derive(Clone, Copy)]
pub struct ConnectionEvent {
    // 1 = connected, 2 = disconnected
    pub action: u32,
}
$ cargo build --quiet && sudo ./target/debug/hello-axum
Idle for 19.527µs
Waiting for Ctrl-C...
(in another terminal: ssh 127.0.0.2)
Connection accepted!
(in another terminal: Ctrl-D to close out of SSH)
Connection closed!
Idle for 5.001708865s
Idle for 10.003602174s
Idle for 15.004068679s
Idle for 20.005524839s
Idle for 25.006052848s
Idle for 30.007529878s
Idle for 35.008838041s
Idle for 40.010259957s
Idle for 45.011105232s
Idle for 50.012581951s
Idle for 55.013017848s
Idle for 60.01454433s
Stopping machine. Goodbye!