git11 is an AI workspace for GitHub engineering teams
Hey HN, I built git11, an AI workspace for engineering teams that connects to your GitHub org.

What it does:
- Auto-generates structured documentation for any repository
- Lets you ask natural-language questions about your codebase ("how does auth work?", "where is the payment logic?")
- Manages org-wide access with role permissions and audit logs
- Official GitHub App integration, read-only by default

I built this because I kept seeing teams with zero docs, and developers wasting hours just navigating unfamiliar codebases.

Tech stack: React, Supabase, GitHub API, Resend for email, Google auth.

Would love feedback, especially from anyone who's dealt with documentation debt on a growing codebase.

https://git11.xyz
Comments URL: https://news.ycombinator.com/item?id=47611889
Points: 1 | # Comments: 0

[P] Trained a small BERT on 276K Kubernetes YAMLs using tree positional encoding instead of sequential
I trained a BERT-style transformer on 276K Kubernetes YAML files, replacing standard positional encoding with learned tree coordinates (depth, sibling index, node type). The model uses hybrid bigram/trigram prediction targets to learn both universal structure and kind-specific patterns — 93/93 capability tests passing. Interesting findings: learned depth embeddings are nearly orthogonal (categorical, not smooth like sine/cosine), and 28/48 attention heads specialize on same-depth attention (up to 14.5x bias). GitHub: https://github.com/vimalk78/yaml-bert submitted by /u/vimalk78
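As a rough illustration of the tree-coordinate idea (a sketch, not code from the linked repo — all names here are illustrative): instead of a flat sequence position, each YAML node gets a (depth, sibling index, node type) tuple, computed by walking the parsed document.

```python
def tree_coordinates(node, depth=0, sibling=0, coords=None):
    """Assign a (depth, sibling_index, node_type) coordinate to every
    node of an already-parsed YAML tree. These tuples stand in for the
    learned tree positional encoding described in the post."""
    if coords is None:
        coords = []
    if isinstance(node, dict):
        coords.append((depth, sibling, "mapping"))
        for i, (key, value) in enumerate(node.items()):
            coords.append((depth + 1, i, "key"))
            tree_coordinates(value, depth + 1, i, coords)
    elif isinstance(node, list):
        coords.append((depth, sibling, "sequence"))
        for i, item in enumerate(node):
            tree_coordinates(item, depth + 1, i, coords)
    else:
        coords.append((depth, sibling, "scalar"))
    return coords

# A tiny Deployment-like document, already parsed into Python objects.
doc = {"kind": "Deployment",
       "spec": {"replicas": 3,
                "containers": [{"name": "app"}, {"name": "sidecar"}]}}
coords = tree_coordinates(doc)
```

Nodes at the same nesting level share the first coordinate regardless of where they fall in the token sequence, which is the signal the depth-specialized attention heads in the post could latch onto.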
Rusty Flying Robots: Learning a Full Robotics Stack with Real-Time Operation on an STM32 Microcontroller in a 9 ECTS MS Course
arXiv:2604.00032v1 Announce Type: cross Abstract: We describe a novel master's-level project class that teaches robotics along the traditional robotics pipeline (dynamics, state estimation, controls, planning). One key motivational element is that students directly apply the algorithms they learn on a highly constrained compute platform, effectively making a robot fly. We teach nonlinear algorithms as deployed in state-of-the-art flight stacks such as PX4. Didactically, we rely on two core concepts: 1) avoidance of provided black-box software infrastructure, and 2) use of the safe and efficient programming language Rust, both on the PC (for simulation) and on an STM32 microcontroller (for robot deployment). We discuss our methodology and the student feedback over two years with
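The state-estimation stage of the pipeline above can be made concrete with the simplest sensor-fusion scheme taught in such courses: a one-axis complementary filter (a sketch only, not code from the course, and in Python rather than the course's Rust; PX4-class stacks use more sophisticated EKF-style estimators).

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse a gyro rate stream with accelerometer-derived angles:
    integrate the gyro for short-term accuracy, and pull the estimate
    toward the accelerometer angle to correct long-term drift.
    alpha weights the gyro-integration path."""
    angle = 0.0
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# Hovering at a constant true angle of 0.5 rad: the gyro reads 0 rad/s
# and the accelerometer reads 0.5 rad; the estimate converges to 0.5.
est = complementary_filter([0.0] * 500, [0.5] * 500, dt=0.01)
```

The same update is a few lines of `no_std` Rust on an STM32, which is part of why this family of algorithms suits a bare-metal teaching platform.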
b8631
sync : ggml

Release artifacts:
- macOS/iOS: macOS Apple Silicon (arm64), macOS Intel (x64), iOS XCFramework
- Linux: Ubuntu x64 (CPU), Ubuntu arm64 (CPU), Ubuntu s390x (CPU), Ubuntu x64 (Vulkan), Ubuntu arm64 (Vulkan), Ubuntu x64 (ROCm 7.2), Ubuntu x64 (OpenVINO)
- Windows: Windows x64 (CPU), Windows arm64 (CPU), Windows x64 (CUDA 12, CUDA 12.4 DLLs), Windows x64 (CUDA 13, CUDA 13.1 DLLs), Windows x64 (Vulkan), Windows x64 (SYCL), Windows x64 (HIP)
- openEuler: openEuler x86 (310p), openEuler x86 (910b, ACL Graph), openEuler aarch64 (310p), openEuler aarch64 (910b, ACL Graph)
b8634
chat : add Granite 4.0 chat template with correct tool_call role mapping (#20804)

Introduce LLM_CHAT_TEMPLATE_GRANITE_4_0 alongside the existing Granite 3.x template (renamed LLM_CHAT_TEMPLATE_GRANITE_3_X). The Granite 4.0 Jinja template uses XML tags and maps the assistant_tool_call role to assistant. Without a matching C++ handler, the fallback path emits the literal role assistant_tool_call, which the model does not recognize, breaking tool calling when --jinja is not used.

Changes:
- Rename LLM_CHAT_TEMPLATE_GRANITE to LLM_CHAT_TEMPLATE_GRANITE_3_X (preserves existing 3.x behavior unchanged)
- Add LLM_CHAT_TEMPLATE_GRANITE_4_0 enum, map entry, and handler
- Detection: + ( or ) → 4.0, otherwise → 3.x
- Add production Gr
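The core of the fix described above is role normalization before rendering. A minimal sketch in Python (standing in for the actual C++ handler; the role-marker strings and function names here are illustrative, not copied from the real Granite template):

```python
# Roles the model was not trained on get mapped to ones it knows,
# instead of falling through and being emitted literally.
GRANITE_40_ROLE_MAP = {"assistant_tool_call": "assistant"}

def render_turn(role, content):
    """Render one chat turn, normalizing the role first so that a
    tool-call turn is emitted under the 'assistant' role the model
    recognizes, rather than the literal 'assistant_tool_call'."""
    mapped = GRANITE_40_ROLE_MAP.get(role, role)
    return f"<|start_of_role|>{mapped}<|end_of_role|>{content}<|end_of_text|>"

turn = render_turn("assistant_tool_call", '{"name": "get_weather"}')
```

Without the mapping step, the fallback path described in the notes would put the unrecognized role string directly into the prompt, which is exactly what broke tool calling when --jinja was not used.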
b8635
Relax prefill parser to allow space. (#21240)

- Move changes from prefix() to parser generation
- Only allow spaces if the next parser is not a pure content parser

Release artifacts: same platform list as b8631 above.
Geometric Visual Servo Via Optimal Transport
arXiv:2506.02768v2 Announce Type: replace Abstract: When developing control laws for robotic systems, the principal factor in evaluating their performance is choosing inputs that allow smooth tracking of a reference input. In the context of robotic manipulation, this involves translating an object or end-effector from an initial pose to a target pose. Robotic manipulation control laws frequently use vision systems as an error generator, tracking features to produce control inputs. However, current control algorithms do not take into account the probabilistic nature of the extracted features and instead rely on hand-tuned feature-extraction methods. Furthermore, the target features can exist in a static pose, thus allowing a combined pose and feature error for control generation. We present a geo
