[New Model] - CatGen v2 - generate 128px images of cats with this GAN
Hey, r/LocalLLaMA! I am back with a new model - no transformer this time, but a GAN! It is called CatGen v2 and it generates 128x128px images of cats. You can find the full source code, samples, and the final model here: https://huggingface.co/LH-Tech-AI/CatGen-v2

Look at this sample after epoch 165 (trained on a single Kaggle T4 GPU): https://preview.redd.it/t1k3v71auqsg1.png?width=1146&format=png&auto=webp&s=26b4639eb7f9635d8b58a24633f8e4125859fd9e

Feedback is very welcome :D
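For anyone curious what a 128x128 GAN generator typically looks like, here is a minimal DCGAN-style sketch in PyTorch. All layer widths and the latent size below are my own assumptions for illustration - the actual CatGen v2 architecture is in the repo linked above:

```python
# Minimal sketch of a DCGAN-style generator for 128x128 RGB output.
# Layer sizes and z_dim are illustrative assumptions, not the CatGen v2 code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=128, base=64):
        super().__init__()
        self.net = nn.Sequential(
            # z_dim x 1 x 1 -> (base*16) x 4 x 4
            nn.ConvTranspose2d(z_dim, base * 16, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 16),
            nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(base * 16, base * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 8),
            nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4),
            nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2),
            nn.ReLU(True),
            # 32x32 -> 64x64
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base),
            nn.ReLU(True),
            # 64x64 -> 128x128, 3 channels, values in [-1, 1]
            nn.ConvTranspose2d(base, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Sample a batch of 128x128 images from random noise
g = Generator()
imgs = g(torch.randn(16, 128, 1, 1))  # -> (16, 3, 128, 128)
```

Each transposed-conv stage doubles the spatial resolution (4 -> 8 -> 16 -> 32 -> 64 -> 128), which is the standard way a DCGAN generator reaches 128px; a setup like this trains comfortably on a single T4.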