Ask HN: Those with success using GPT-4 for programming – what are you doing?
92 points by ablyveiled 12 months ago | hide | past | favorite | 106 comments
Personally, GPT-4 has wasted about as much of my time as it's saved with its constant hallucinations and lack of insight when writing NixOS derivations and a Rust web backend. I wouldn't let it near my delicate-hackish checkpointing work in C++.

What are you doing where it serves you well?




SQL queries. I am poor at writing SQL queries with bunches of joins. I just show it the table definitions and tell it what I want. Sometimes it doesn't get it quite right (which is why you need some knowledge of what you are asking of it) and I point it out and it fixes it.

Regular Expressions. I hate doing regexps. It is excellent at them.
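For instance, the kind of pattern I'm happy to delegate (an illustrative example with a made-up log format, not one from my actual work):

```python
import re

# Hypothetical log-line pattern: date, severity level, free-text message.
LOG_RE = re.compile(
    r"^(?P<date>\d{4}-\d{2}-\d{2}) "
    r"(?P<level>DEBUG|INFO|WARN|ERROR) "
    r"(?P<msg>.*)$"
)

m = LOG_RE.match("2023-05-19 ERROR disk almost full")
print(m.group("date"), m.group("level"))  # 2023-05-19 ERROR
```

Named groups like these are exactly the sort of detail it gets right and I'd otherwise have to look up.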

Wiki articles for an encyclopedia I'm developing. It is great at this, but occasionally I catch it hallucinating articles out of whole cloth because it has access to zero information about what I asked it about, so it just goes from the article title and imagines what it must be about.

I know where to use it well, and I've found my output has not only doubled since I started using it; I'm also enjoying coding more than ever, because it has gotten rid of the worst drudgery, the kind that would cause me to switch from my IDE to my browser and bring up HN to avoid working.


I've also been using ChatGPT to create SQL queries -- it's genuinely really, really great at this. I do have to say "I'm using PostgreSQL with DataGrip IDE" and feed it back any errors I get, because there are a lot of permutations for SQL, but it's a pretty seamless experience overall.

Here's an example of one that helped me a lot. This takes no time at all to get ChatGPT to generate by giving it anonymized snippets of table data saying:

"here's an example of the relevant tables, I want output that looks like this, what query does that?"

and it just works instantly.

ChatGPT did NOT format it this way; I changed the line breaks and indentation to better fit viewing in Chrome/Safari. I also renamed the tables/columns to generic names for confidentiality, which admittedly makes it all the more confusing.

  -- Existing Records
  WITH search_table_A AS (
    SELECT * FROM tableA
    WHERE some_primary_key = :searchKey
  ),
  search_table_B AS (
    SELECT * FROM tableB
    WHERE some_primary_key = :searchKey
  ),
  combined_query AS (
    SELECT COALESCE(search_table_A.our_UUID, search_table_B.our_UUID) AS our_UUID
    FROM search_table_A
    LEFT JOIN search_table_B ON search_table_A.some_primary_key = search_table_B.some_primary_key
  )
  SELECT tableC.* 
  FROM tableC
  JOIN combined_query ON tableC.our_UUID = combined_query.our_UUID;


Thirded. My only complaint is that it helpfully strips out comments and occasionally forgets which SQL variant I’m using. I have found it helpful to feed it back my query after I’ve implemented the recommended changes and made my tweaks, along with something like, “This is what I did; can you help me improve the section starting with `SELECT * FROM product`?” The context seems to improve its subsequent recommendations.


Maybe you're using the paid version, or I just suck at prompting, but so far I haven't been able to rely on the generated SQL, and not for lack of trying. Workable queries, yes, but never doing what I intended...


I only just got access to the Plus version. But for the last month or so I've been using the free version with absolutely no problems in the scripts it generates.


I use ChatGPT-4 for $20/month. It pays for itself many times over, it's not really a hard decision at all! And yes, GPT-4 is much better at coding.


I’m curious what it saves you. I’ve done the same for a couple months, but found its results too unreliable to be of any use to me, in terms of time savings.


It's also great at fixing SQL queries that the database errors on.

So now, rather than take time to get complex queries right, I quickly sketch a query that's roughly what I want, and tell ChatGPT to "Fix this Sqlite query."
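As a made-up illustration of that round trip (tables and query invented here, runnable with Python's stdlib sqlite3): SQLite rejects the rough sketch, and the corrected version is the kind of thing ChatGPT hands back.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
    INSERT INTO orders VALUES (1, 'ada', 10.0), (2, 'ada', 5.0), (3, 'bob', 7.5);
""")

# The rough sketch -- SQLite errors with
# "a GROUP BY clause is required before HAVING":
#   SELECT customer, SUM(total) FROM orders HAVING SUM(total) > 8;

# The kind of corrected query that comes back:
fixed = """
    SELECT customer, SUM(total) AS spent
    FROM orders
    GROUP BY customer
    HAVING SUM(total) > 8;
"""
print(con.execute(fixed).fetchall())  # [('ada', 15.0)]
```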


>SQL queries

Short, functional things like this are what I've found to really be ChatGPT's strength. For example, I had some godawful nested PHP code, written years ago, that I was trying to muck with. Asking GPT to sort it out for me took way less time than trying to figure it out myself.

I've also used it for SQL queries and things like list comprehension, etc.
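Those rewrites are usually mechanical. A toy example of the kind of transformation I mean (illustrative, not from my codebase):

```python
# Nested-loop version I might sketch first:
pairs = []
for x in [1, 2, 3]:
    for y in "ab":
        pairs.append((x, y))

# The comprehension rewrite ChatGPT tends to suggest; same result:
pairs2 = [(x, y) for x in [1, 2, 3] for y in "ab"]
print(pairs == pairs2)  # True
```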

The few times I've pasted in whole code blocks and asked it to do XYZ, I've ended up basically debugging my prompt instead of my code.


Coding with LLMs was easier after understanding their limitations.

1. They don't know what they are saying until they have said it.

2. Your inputs and its outputs help make the next message.

3. LLMs are not suited for information retrieval the way databases and search engines are.

LLMs excel at reasoning and predicting subsequent text based on given context. Their strength lies in their ability to generate relevant and cohesive responses.

To optimize results, outline clear rules, strategies, or ideas for the LLM to follow. This helps the model craft, revise, or build upon the established context.

Starting with a precise query and introducing rules or constraints incrementally can help steer the model's output in the desired direction.

Avoid zero-shot queries as these can lead to the model generating unexpected or unrelated responses.

Be cautious while seeking pre-calculated or non-derived answers. Some instruction-tuned models might output incorrect solutions, as they are trained to respond to certain queries without proper context or information.

Also, and this is my biggest gripe (no fault of ours, of course): don't seek pre-calculated or non-derived answers. I've seen some of the demonstration data that people are using to train instruction-tuned models, and the models are being taught to respond by making up answers to problems they shouldn't try to compute. By the way, the output below is wrong:

  {
    "instruction": "What would be the output of the following JavaScript snippet?",
    "input": "let area = 6 * 5;\nlet radius = area / 3.14;",
    "output": "The output of the JavaScript snippet is the radius, which is 1.91."
  },

https://github.com/sahil280114/codealpaca/commit/0d265112c70...
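You can check the arithmetic directly; the labeled output of 1.91 doesn't match what the snippet actually computes:

```python
# Re-running the snippet from the training example in Python:
area = 6 * 5            # 30
radius = area / 3.14    # 9.554..., not the 1.91 claimed in the "output" field
print(round(radius, 2)) # 9.55
```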


I use it to brainstorm and prototype approaches to a problem. First, I'll ask it to give me an overview of the problem domain; this gives the LLM context. Then, I describe the problem and ask it to generate solutions, along with pros/cons of each approach. This is iterative: you might ask it questions, modify its suggestions, periodically summarize. After that, you can either ask it to give you code for a prototype or build it yourself.

These models are good for ideation, scaffolding, and prototypes. It's currently clumsy to fully build an app with an LLM, but they are quite useful for certain tasks.


What sort of programs or systems are you creating?


Mostly machine learning pipelines, small react sites, and python CLIs. I've also used this framework for planning a schedule for my hobby project, getting advice for social predicaments, and optimizing the location of desk fans in my bedroom.


GPT-4 is amazing for me for writing Python, Go, and Kubernetes YAML.

I have designed and implemented 8 operators with between 90-100% of the code auto generated.

I use it to generate Mermaid diagrams that implement the first 3 layers of the C4 model, sometimes with some editing or guidance needed to modify them, and then have it generate the code.

I generate diagrams with high temperature, and code with low.

That’s my experience, anyway. I have a coworker using ChatGPT Plus (GPT-4), and they fail to get anything working. It’s not zero effort, but the way I think seems to generally align with the model.

I love having a partner to talk to about thoughts and design ideas while on walks using the ChatGPT app w/ transcription.

I have never felt more productive.


I think I fall into the 2nd category, like your co-worker. What would be your tips to actually get anything useful out of it?


I use it as a personal tutor. It is really good at it, because it can respond to clarifying follow up questions and it is usually right. For pure development or peer review, it is not there yet. I asked it recently to help me decipher some real life SQL performance issues and it did not really help. Actually, it turned out to be a waste of time. The classic Postgres query analyzer and some 3rd party query visualization tools got the job done.



I hadn't even considered that facet! I probably bother my coworkers more than I should with what amounts to rubberducking.


https://RTEdge.net was polished up in no time, thanks to GPT-4's vast knowledge. If you're knowledgeable in the field you're asking about, it becomes a pure time saver, as you can either just tell when an answer is right or test it.

Its general world knowledge is also vastly impressive, though.

It can draw a 2D representation of a keyboard key

  _____________
  |           |
  | Keycaps   |
  |___________|
  |           |
  | Key Switch|
  |___________|
  |           |
  | Circuit   |
  | Board     |
  |___________|
  |           |
  | Base      |
  |___________|
It can explain what happens at each layer when I press the key and, more impressively, also what happens if I spill cola over it:

  _____________
  |           |
  | Keycaps   | --> Coca-Cola makes the keys sticky
  |___________|
  |           |
  | Key Switch| --> The switch could become sticky, preventing it from registering key presses properly
  |___________|
  |           |
  | Circuit   | --> The liquid could cause a short circuit, damaging the keyboard 
  | Board     |
  |___________|
  |           |
  | Base      | --> Coca-Cola pools at the bottom, potentially leaking into the circuit board from below
  |___________|


What is more impressive is that its knowledge spans a vast range of domains!

Ask about "layers of hierarchy at a high school":

  ::::::::::: School Board --> Sets policies, goals, and oversees school administration
  ___________ Principal --> Oversees the daily operations of the school, and implements policies and curriculum
  %%%%%%%%%%% Teachers --> Develop lesson plans, instruct students, and evaluate student performance
  ~~~~~~~~~~~ Support Staff --> Includes roles such as school nurses, librarians, and custodial staff
  +++++++++++ Students --> The primary recipients of education in the school

Go on with: "Say a student intentionally hit another, causing injury. Which layers activate?"

  ::::::::::: School Board --> Might be involved if the incident triggers policy reviews or disciplinary hearings
  ___________ Principal --> Would likely oversee the response, including disciplinary actions and communication with parents
  %%%%%%%%%%% Teachers --> Depending on the situation, they might provide first aid, supervise students, or report the incident
  ~~~~~~~~~~~ Support Staff --> Nurses would provide medical aid. Custodial staff may need to clean up. Security may investigate.
  +++++++++++ Students --> The perpetrator may face disciplinary action. The victim receives aid and support. Witnesses may give statements.

Add Korean drama: "The perpetrator is very rich and her parents finance the entire school. How could this change the picture?"

  ::::::::::: School Board --> May feel pressure due to financing, could potentially sway decisions or policies
  ___________ Principal --> Might face pressure to minimize the incident, may fear job security if they handle the situation 'wrong'
  %%%%%%%%%%% Teachers --> May feel divided between student safety and job security, could be hesitant to speak out
  ~~~~~~~~~~~ Support Staff --> Similar to teachers, may feel uneasy about reporting the incident or discussing it
  +++++++++++ Students --> Perpetrator may face less severe consequences; victim and witnesses might feel their voices aren't heard

Calm it down: "Say, a student runs out of space in her notebook. Which layers activate?"

  ::::::::::: School Board --> Might not get involved unless the issue is widespread and indicative of a systemic problem
  ___________ Principal --> Usually not directly involved unless there's a recurring issue of school supplies availability
  %%%%%%%%%%% Teachers --> Directly involved, may suggest using additional paper, starting a new notebook, or condensing notes
  ~~~~~~~~~~~ Support Staff --> May offer technical solutions if there's a possibility to switch to digital note-taking
  +++++++++++ Students --> Directly affected, student needs to decide how to continue taking notes, may need to purchase a new notebook

Spice it up!: "A female student dresses very scantily."

  ::::::::::: School Board --> May get involved if the dress code is violated and it becomes a widespread issue or stirs controversy
  ___________ Principal --> Likely to get involved if there are repeated violations, may meet with student or parents
  %%%%%%%%%%% Teachers --> Might report the student if dress code is violated or it disrupts the class; may also address it directly with the student
  ~~~~~~~~~~~ Support Staff --> Might not get involved unless asked to assist with enforcement or if they are also a student advisor
  +++++++++++ Students --> Classmates might react positively or negatively; student in question may face peer feedback or pressure

Add more drama: "An inspector from the Ministry of Education visits to see how lessons are conducted and, during class, asks the students questions on various topics, noticing abysmal performance in all subjects but literature. What repercussions would this have at each layer?"

  ||||||||||| Ministry of Education --> Will likely request a formal review or assessment of the school's curriculum and teaching methods, and may require an action plan for improvement
  ::::::::::: School Board --> Could face scrutiny or pressure from the Ministry of Education, and would likely oversee the development and implementation of improvement plans
  ___________ Principal --> Would be responsible for implementing changes and managing teacher performance, may need to revise curriculum or teaching methods
  %%%%%%%%%%% Teachers --> Would face increased scrutiny and pressure, may need to adjust their teaching methods, might receive additional training or professional development
  ~~~~~~~~~~~ Support Staff --> May not be directly involved, but could face additional work related to implementing changes (such as increased administrative tasks or more student support)
  +++++++++++ Students --> Might face more rigorous coursework or increased homework, may also receive additional support or tutoring

Add parents!: "In this hierarchy, where are the students' parents?"

  ||||||||||| Ministry of Education --> Will likely request a formal review or assessment of the school's curriculum and teaching methods, and may require an action plan for improvement
  ::::::::::: School Board --> Could face scrutiny or pressure from the Ministry of Education, and would likely oversee the development and implementation of improvement plans
  ___________ Principal --> Would be responsible for implementing changes and managing teacher performance, may need to revise curriculum or teaching methods
  %%%%%%%%%%% Teachers --> Would face increased scrutiny and pressure, may need to adjust their teaching methods, might receive additional training or professional development
  ^^^^^^^^^^^ Parents --> Likely to voice concerns, may demand changes, and could be involved in improvement initiatives
  +++++++++++ Students --> Might face more rigorous coursework or increased homework, may also receive additional support or tutoring


RTEdge looks very nice

What exactly is being shown on the map ?

And which libs did u use ?


https://RTEdge.net

is our global, multi-cloud network! We build, on this foundation, a real-time coding/hosting platform using nothing but web standards and APIs.

On the map, you can click on the nodes and edges (lines) for additional info! This map will be embedded into our marketing pages, which we are in the process of writing.

---

As a teaser, watch https://rt.ht/yt (a 20s video about RTCode.io – a web playground with reload-free, 2-way input ⇄ output sync; service workers users can code in-editor ( https://sw.rt.ht/?io ) and deploy to Cloudflare's network ( https://sw.rt.ht ); and other features)!


This really does seem like an AI-generated response. WTF


You hit the nail on the head right there. It’s all based on a doomed technology that is not going to give us the benefits it promises. I do think future models will improve on this, but there’s likely very little room (read: data) left to make that happen.

Developing symbolic reasoning further would likely be a much better use of researchers’ time, even if it takes longer. But the incentives in the short term just aren’t there, sadly.


I'm the creator of rqlite[1], an open-source distributed database written in Go.

It's saving me time (sometimes 2x speed up on certain, well-specified, tasks), and I enjoy using it. I wrote a blog post with some details on how it has helped me code the database: https://www.philipotoole.com/what-did-gpt-4-find-wrong-with-...

That said, the most recent release of GPT-4 seems a little more buggy[2].

[1] https://www.rqlite.io

[2] https://news.ycombinator.com/item?id=35970711


I’m glad I’m not the only one that noticed. I’ve been having better luck with 3.5 recently as long as I use the API to give it some examples to start with.


Reading your blogpost it wasn't clear to me how you ran your code through GPT-4. Which prompts did you use?


I brought up the web UI, and said something like this: "Here is a Go source file, see any problems, issues, or suggested bug fixes? <paste the Go code>"

Simple as that. Of course, sometimes I got the "message too big" error. So I pasted bits of the sources files, choosing pieces I thought were reasonably self-contained. I also fed much of my unit tests through it, asking "see any missing test cases?" While some of the answers were not that helpful, digesting the feedback from GPT-4 made me think more about my code, and make some changes for the better.


Nice, that's simple enough! You may want to try talking to it via the API; last time I checked, the web UI doesn't accept prompts >4K tokens, while GPT-4 via the API has an 8K-token limit. And then there's the 32K version...


I can't believe they are only charging me $20/month for access to GPT-4. I'd pay more for it.


I’d pay more if it was faster, had API access, and stronger privacy.

Right now Sam Altman is channeling Zuckerberg by claiming to be the good guy changing the world, while in reality hoarding data and asking Congress to build him a moat.


Exactly. Not only that, but according to a recently filed lawsuit they’re providing preferential access to YC batch companies first and then everyone else.


1. It can do some reformatting tasks faster than I can do them by hand. Example: Inline FuncA into FuncB <paste code for both functions>.

2. For more complicated tasks it requires good prompting. Example: Tell me three ways to fix this error, then pick the best way and implement it. <paste error> <paste relevant code>. Without the "step-by-step" approach it almost never works.

3. It's pretty good at writing microbenchmarks for C++. They always compile, but require some editing. I use the same prompting approach as (2.) for generating microbenchmarks.

4. It's pretty useful for explaining things to me that I then validate later via Google. Example (I had previously tried and failed to Google the answer): The default IEEE rounding mode is called "round to nearest, ties to even". However all large floating point numbers are even. So how is it decided whether 3,000,003 (which is not representable in fp32) becomes 3,000,002 or 3,000,004?.

5. It can explain assembly code. I dump plain objdump -S output into it.

The main limitation seems to be UI. chat.openai.com is horrible for editing large prompts. I wrote some scripts myself to support file-based history, command substitution etc.


But 3,000,003 is exactly representable in fp32, as are 3,000,002.5 and 3,000,002.75 before it, and 3,000,003.25, 3,000,003.5, and 3,000,003.75 after it?

https://www.h-schmidt.net/FloatConverter/IEEE754.html

And AFAIK the LLMs, lacking the ability to actually calculate, can't check which numbers are such?


Ha, you're right. I should've used 30,000,000. The answer it gave regarding how to do "ties to even" was still correct though: "Evenness" is decided by the least significant bit of the mantissa, not by the "evenness" of the decimal representation.
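For what it's worth, you can watch the tie-breaking happen by round-tripping through float32 with the struct module (my own sketch):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (f64) to the nearest float32."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Above 2**24, float32 can only represent even integers (the spacing is 2).
# 30,000,001 is exactly halfway between 30,000,000 and 30,000,002, so the
# tie goes to the neighbor whose mantissa ends in a 0 bit.
print(to_f32(30_000_001.0))  # 30000000.0
print(to_f32(30_000_003.0))  # 30000004.0 (30,000,002 has an odd mantissa)
```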


I had to optimize some Python code to reduce its memory usage. After trying all ideas I could think of, I thought about rewriting it in a different language. Copied and pasted the code into ChatGPT 4. Tried Rust at first, but there were too many compilation errors. Then I tried Go and it worked perfectly. For the next couple of weeks, I used it to improve the Go code, as I've never used Go. It gave me great answers, I think maybe once or twice the code didn't compile (I used it dozens of times per day).

I'm now using the optimized Go code in production.


I default to immediately asking GPT4 to review its solution and fix any mistakes it finds.

There’s also an interesting paper about guiding it to build a “tree of thoughts”, which allows it to move forward and backward as it comes up with solutions, then present its best solution to you. The paper suggests you can squeeze a lot more performance out of LLMs (even smaller ones) this way. I’ve been wanting to experiment with my prompting in this fashion: https://arxiv.org/pdf/2305.10601.pdf


What kind of memory efficiency gains did you see as a result of the effort?


I didn't keep track of the benchmarks very well. However, it went from taking 3+ days and 180GB of memory to process 80M rows [1] to processing 1.3B rows in ~6 hours using ~90GB of memory.

[1] I stopped the process early, as it was taking too long


I did this experiment (a game) to see what's up and what's down around all this: https://github.com/romland/llemmings.

While there is some GPT4 in there, it's mostly ChatGPT and a small handful of LLaMA solutions.

That project is a contrived scenario and not realistic, but I wanted to experiment with _exactly_ what you are talking about.

Very often I could have done things a lot faster myself, but there is one aspect that was actually helpful, and I did not foresee it. When inspiration gets a bit low and you're not in the "zone"; throwing something into an LLM will very often give me a push to keep at it. Even if what is coming up is mostly grunt work.

The other day I threw together a script to show the commits in a reverse order and filter out (most of) the human commits (glue) over at https://llemmings.com/


Agreed. So far I've derived most benefit from ChatGPT by unblocking me when starting tasks and also giving overviews about things (much like an improved search engine).


It has saved me some time writing queries in some obscure DSLs. Results were slightly off but close enough for me to run with it. Replaced my first pass of reading through the docs, but not really helpful beyond that.

I probably could have spent more time framing the query to get better results.


I've been using it to generate code that follows certain pattern.

For example, I saved 2 hours by telling it to generate a GraphQL resolver after inferring a Zod schema.

It followed the code conventions from another file.

It generated it beautifully.

Every time there's boilerplate, to ChatGPT it goes.


Writing in a language that I don't know that well, but know enough of. Okay, my JavaScript isn't the strongest, so I'd probably have to spend 30-45 minutes just coming back up to speed on basic AJAX and modern syntax. Or, BAM: I write a schema of my idea and get GPT to put it on paper with halfway decent style and syntax. I can take it from there.


One of the first things I tried (which I thought would be harder) was “Write a Python program that accepts a YouTube url and a start and stop time in the form MM:SS and a file name and it extracts the video between those times and converts it to an animated gif. Also provide a list of packages to be installed.”

The resulting code worked. Took an interaction or two to add usage info and tweak things, but it’s a neat little utility. Due to the libraries used, it was also simpler than I expected and would have been easy to write if I knew about those libs, so I also learned something.

It’s just one or two steps away from just saying “Go to this YouTube URL and extract the video between 3:20 and 3:27 into an animated GIF named ‘CatAndRaccoon.gif’” and having it write, debug, and execute the code.


Also had it hallucinate absolute garbage from time to time. The worst offender, that consistently shows up: Claiming there are certain library features, when there are not.

Easily the best rubber ducky though. Copy pasting big blocks of my own code and asking about it gives new perspectives to problems I am stuck on. Huge time saver in that regard.


I’ve been very happy with most questions and coding queries. The worst results I’ve gotten were related to external libraries. We use Telerik UI for Xamarin components, and Telerik has similar libraries for ASP.NET, WinUI, JavaScript, etc.; the controls often have the same names across libraries but different methods and properties. Try as I might to steer it, it has yet to produce any code using these components that doesn’t mix methods. And as others have said, it will argue with you about those features, or it will say “Sorry, that method is for the WinUI version. Here’s a corrected version:” and then make the same mistake again.


Oh yeah, I agree with that worst offender - features that don't exist that it quotes so confidently.

Especially when libraries get updated constantly - I ran into cases where it wrote functions based on previous versions of the library, where the function names and signature had already changed since.

I haven't tried yet, but I bet if I used a version that has web browsing and can read the latest documentation, it would do a good job and save me having to read the documentation myself and try to work it out.


I have found it completely fucking worthless for anything I have thrown at it.

Computing how to dilute a concentrate into a fruit punch, how to create and render a template in golang, how to parse an int in golang, what makes a chord progression a "lydian" chord progression... it goes on and on.


Ikr? Even with GPTPlus it would still hallucinate things in 90% of cases I’ve thrown at it, so by the time I fully debug its output, I’ve spent about the same if not more time than just figuring it out on my own using good ole Google (or an alternative search engine) and StackOverflow.

And that’s for things I’m fairly competent in, to judge the quality of an answer.


I was writing code for a Raspberry Pi Pico as a beginner. The Pico was hooked up to a little display with no documentation other than a few horribly convoluted, broken-English examples with hundred-line functions, and I had no idea what to do to just display some things on this little screen.

So I simply dumped the code examples into ChatGPT and said "Given these examples that display text and boxes on a screen, can we write a simpler interface and straightforward small functions for the display code?"

And it was done.

This wasn't code I wanted to mess with, I really just wanted to build my application rather than spend any time messing with the code to interact with this proprietary display. It was fantastic!


I let it write the boring stuff. I needed a Python class built from an example dict, with properties mapping to the keys in said dict (I have a constraint preventing me from using something like json_dataclass or similar). While it's churning that out, I can focus on other things.
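A minimal sketch of what that generated boilerplate amounts to (the field names here are made up, and the actual generated class spelled each property out explicitly):

```python
# Hypothetical example dict; the real one had different fields.
example = {"name": "widget", "price": 9.99, "in_stock": True}

def _make_property(key):
    # Each property reads the matching key from the backing dict.
    return property(lambda self: self._data[key])

class Product:
    def __init__(self, data):
        self._data = dict(data)

# Attach one read-only property per key of the example dict.
for key in example:
    setattr(Product, key, _make_property(key))

p = Product(example)
print(p.name, p.in_stock)  # widget True
```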

It's also great for regex. I'm pretty good at writing these, but I recently came across a pretty complex one in our codebase that wasn't commented. I pasted it into GPT-4 and asked it to explain it; it broke down each bit in detail, and in the end even generated an example string that would match it.


AI has replaced 90% of my SO searches. I now don't contribute anything back and have to trust unvetted code that tends to have defects

Coincidentally, SO seems to be worse. But then again, I only use it 10% of the time so I might be wrong


Garbage in, garbage out, as they say. Though I still trust SO waaay more than GPT, and so deleted my GPT account for good.


I documented my process of building an email-auto responder with ChatGPT, not even GPT-4: https://dearai.substack.com/p/coding-gabrielle-six-chatgpt-s...

I only have a little coding background, and ChatGPT didn't make me an expert. But the collaborative Human+AI process did allow me to complete a project end-to-end, including figuring out where to host it and how to do that.

I found that it helped me with 6 "superpowers":

  1. Choosing between options (e.g., AWS vs. GCP vs. Zapier)
  2. Walk me through it (e.g., how to set up a Firestore database)
  3. Text-to-code (including simple nuisance calculations and code-to-code changes)
  4. Help me out! (i.e., fixing broken code based on error messages)
  5. Teach me (e.g., learning the difference between let, const, var, etc.)
  6. Check my code (e.g., it caught errors before I even ran the code)

Check out the post for more details if you'd like!

There's another post on building a website from scratch where I also tried Replit's Ghostwriter. Yes, I faced a lot of frustrations in the process, but going from "I can try to struggle through this on my own" to "I actually have some help here that's always available and usually right" is amazing, IMO.


Writing small python or bash utility one-off scripts to do various things.

For example, a utility to use the bitbucket api to dump all of the environment variables configured for a pipeline.


This one was using Bing AI chat:

I had a bunch of PDF files with coded file names, like ABC-123-abc-999-001.pdf, where each section of the file name was meaningful. Inside the PDF were several form fields. I needed to insert records in a database for each of the 97 files. Easy, but tedious.

I prompted it with a description of the file name breakdown, where the text to grab was in the PDF, and then asked for a Python program to find the files (in subdirectories), extract the PDF text, and write a text file with the SQL Insert statements for all the files.

It took two or three minor iterations, but less time than it would have taken me to write from scratch because my Python is rusty. Regular expressions, PDF processing libs, file system traversal, and SQL generation, and it all compiled and worked from the start (the iterations were to tweak a few things I didn’t specify).
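The file-name-to-SQL part might look something like this (the breakdown of the coded name and the table are invented here; the actual PDF text extraction, which needs a third-party library, is left out):

```python
import re

# Hypothetical meaning for each section of a name like ABC-123-abc-999-001.pdf.
NAME_RE = re.compile(
    r"(?P<project>[A-Z]{3})-(?P<batch>\d{3})-(?P<site>[a-z]{3})"
    r"-(?P<lot>\d{3})-(?P<seq>\d{3})\.pdf$"
)

def insert_for(filename: str) -> str:
    # Parse the coded sections and emit one INSERT statement per file.
    parts = NAME_RE.search(filename).groupdict()
    cols = ", ".join(parts)
    vals = ", ".join(f"'{v}'" for v in parts.values())
    return f"INSERT INTO documents ({cols}) VALUES ({vals});"

print(insert_for("ABC-123-abc-999-001.pdf"))
# INSERT INTO documents (project, batch, site, lot, seq)
#   VALUES ('ABC', '123', 'abc', '999', '001');
```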

This is the kind of thing that I think is perfect for these tools. With the new tools that let the LLM compile and execute code, it will be cool (and potentially dangerous).


Adobe automations are written in a truly bizarre language that, among other things, starts counting at 1 instead of 0. ChatGPT cheerfully obliges.


Fortran, Matlab, R, and Julia are a few languages than count from 1 instead of 0. It's not bizarre to do so.


Now, starting to count from -1... that's a horse of a different color


IME GPT-4 is bad at doing things and great at looking things up for you. Rather than trying to get it to do things, I ask it how I should do it (and argue with it about any hallucinations that come out—a process that often teaches me a lot!) and then I go do the task myself, copying whatever I need from the conversation but not relying on it slavishly.

It’s basically a fuzzy inverted index for public docs and code. “What’s the normal way of doing X”-type queries work best, with quality falling off quickly as complexity increases. Stuff like “what is a ‘git log’ command that only shows commits containing a particular snippet in the diff, limited to merge commits on master”, for example.

For more complex tasks, a trick I’ve seen work well is “give me an outline for how to do large task X” followed by “let’s go through each step in the above outline. For each one, I’d like a description of how it should be solved, including example code. Let’s start with the first step.” But that trick is not totally reliable and has its own complexity limit.


Top tips:

* If you can write it from memory, go ahead and do so. Do not consult GPT-4

* If you know what to do but need to look a few things up - put your best effort into GPT-4. It will flesh it out

* If you're using a library that is new, you can copy paste the library examples into GPT-4 and then describe what you want to do. It will give a great starting point


>"If you can write it from memory, go ahead and do so."

I know how to, for example, divide and multiply on paper. I still use a calculator for that.


My calculator has never given me a wrong answer, so I trust and use it. Every time I have used ChatGPT to solve a coding problem, it has missed important edge cases or been flat-out wrong. I find most tasks significantly faster to do myself, but I have found inspiration in a wrong answer that ChatGPT has given me.

Maybe it's good at trivial stuff, but then I don't need it for trivial stuff, so why use it at all? This example is more like saying you'd use a calculator to solve 5 / 10 when it's faster to solve that yourself.

Now if I was just learning my times tables, sure I'd use ChatGPT, but once I've learned them it's not helping me on the basics anymore, and may be actively hindering me.


You seem to be saying that you're getting a speed-up even in the first case, i.e. where you can write the code without having to think hard or look anything up. If so, how do you do it?


You've got it mixed up. Yes, I can write some code (multiply on paper) without looking anything up, but it will be slower than asking GPT and then cutting and pasting, with maybe a little editing.


I'm great with backend software but not so great with frontend, so I had GPT4 write me a simple React app to test out some endpoint. Works really well. After the code was working, I pasted the app into it and asked it to make it "more visually appealing" and it did.


I use it to get more programming done and to take care of all the things that get in the way:

A big client asked us to answer around 200 questions about our company's security in an Excel sheet.

Then he asked for a cybersecurity standards document (a big thing, around 50 pages).

I took the previous Excel sheet, removed the noise, and anonymized it. Then I passed it to GPT-3.5 to summarize, for each security category (access management, source code security, ...), the bullet points describing how we implement it.

To finish, I passed the bullet points for each category to GPT-4 to write a nice bullshit document that sounds more professional than me.


Autopatching vulnerabilities on my network.

I have a set of tools which build prompts describing the environment and the output of vulnerability scans. The tool then requests a shell script to disable/fix/update the vulnerability. The script is submitted as a PR which has actions that run integration tests. Human intervention is sometimes needed, but the focus is on better engineering of prompts (and by proxy tooling).

Describing the environment relies heavily on my CMDB (Combodo’s iTop), so this is not a one-size-fits-all approach, and it's running entirely in my personal lab of ~100 servers. That said, ChatGPT has given me the best results compared to locally run LLMs.
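As a rough illustration of the prompt-engineering side, a builder along these lines could assemble the environment description and one scan finding into a single request; every field name and all the wording here are hypothetical, not iTop's actual schema or the commenter's tooling:

```python
def build_patch_prompt(host: dict, finding: dict) -> str:
    """Assemble a remediation prompt from CMDB facts and one scan finding.

    All keys (name, os, cve, evidence, ...) are invented for illustration.
    """
    return "\n".join([
        f"Host: {host['name']} ({host['os']} {host['os_version']})",
        f"Role: {host['role']}",
        f"Vulnerability: {finding['cve']} - {finding['summary']}",
        f"Scanner evidence: {finding['evidence']}",
        "Write an idempotent shell script that remediates this vulnerability.",
        "The script will be reviewed as a pull request and run by CI, so it",
        "must exit non-zero on failure and print what it changed.",
    ])
```

The returned script would then be committed to a branch and opened as a PR, so the integration-test actions gate it before any human review.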


I've been using it to refactor Clojure code and port Scheme examples into Clojure.

Maybe because Lisps have very little syntax, I find I can just copy in function/macro definitions as context and GPT-3.5 can rewrite the code into something I can use... And by testing in a REPL (in :dev) I can instantly see which code has hallucinations and which works perfectly.

Tbh I find it hallucinates mainly when dealing with mutable state (e.g. atoms) or with pipelines of complex maps transformed by multimethods.

Just using small bits of data and normal pure functions makes GPT-3.5 work perfectly, perhaps because it doesn't need to take into account code outside the 2048-4096 tokens it's working with right now.


I have tried several combinations of converting one programming language to another. One combination that worked OK, with some manual corrections, was Common Lisp to Clojure. I think Python to Common Lisp worked OK for pure code.


I'm launching a financial planning business right now, but I was recently using it to write XPath queries to use with "xmllint --format".

I then used it to generate a more robust Python script processing these large XML files instead.

I like using it to generate scaffolding and for debugging but I haven't had to touch a legacy C++ code base for a few years.
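For large XML files, the standard library alone can stream elements instead of loading the whole document. This is a hedged sketch of that pattern, not the commenter's actual script, and the item/title element names are invented:

```python
import xml.etree.ElementTree as ET

def extract_titles(source) -> list[str]:
    """Stream <item>/<title> values out of an arbitrarily large XML file
    (or file-like object) without holding the whole tree in memory."""
    titles = []
    for _event, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "item":
            titles.append(elem.findtext("title", default=""))
            elem.clear()  # release the finished subtree - the point of iterparse
    return titles
```

The equivalent xmllint invocation works too, but a script like this is easier to extend once the extraction logic grows beyond one XPath expression.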


I’m using it to do the mundane tasks of unit testing and (some) documentation. I find that the code it spits out isn’t perfect, but getting some boilerplate and fixing it up is pretty fast compared to writing from scratch.

I’ve used this enough that I wrapped some cli glue around it and wrote https://github.com/radoshi/llm-code

I’ve used this mostly to write Python and bash, with some Makefiles and Dockerfiles thrown in.

GPT-4 is better, albeit slower, than 3.5-turbo. HTH!


Text categorization for podcasts.

The prompt asks for specific aspects from a podcast - people, dates, locations - scores them by relevance, and counts the occurrences. Now, GPT can't count for a damn, but the count is a useful proxy. The relevance score is pretty good.

There are existing services that can do this, but with GPT and the API I don't need to read a manual, and I can define exactly what format I want back.

This is what is exciting - GPT will format its responses to _my_ requirements, not the other way around.
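A sketch of what that prompt-plus-format contract might look like; the wording and JSON shape are hypothetical reconstructions of the approach described, and the actual API call is omitted:

```python
import json

ASPECTS = ["people", "dates", "locations"]  # the aspects named above

def build_prompt(transcript: str) -> str:
    # Pin down the response format up front so the reply parses mechanically -
    # the "GPT formats to _my_ requirements" point from the comment.
    return (
        "Extract the following aspects from this podcast transcript: "
        + ", ".join(ASPECTS)
        + ". For each item give a relevance score from 0 to 1 and an occurrence "
        'count. Respond with only a JSON object like {"people": [{"name": "...", '
        '"relevance": 0.9, "count": 3}], ...}.\n\n'
        + transcript
    )

def parse_response(raw: str) -> dict:
    # Treat the counts as a rough proxy at best, as noted above;
    # the relevance score is the more trustworthy signal.
    data = json.loads(raw)
    return {aspect: data.get(aspect, []) for aspect in ASPECTS}
```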


OpenSearch (or Elasticsearch) query building. I was new to the technology, and its syntax took a while to wrap my head around. Instead, I'd just tell ChatGPT my document format and then ask for specific data in natural language.

Fair warning: the queries were not always perfect on the first try, but it was a lot easier than parsing replies to somewhat similar questions on Stack Overflow. Now I mostly write my own queries, but it really helped me get started.
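For flavor, here is the kind of query DSL this workflow produces, written as the Python dict a client would send; the field names (tags, timestamp) are assumptions about the document format, not taken from the comment:

```python
# Natural-language ask: "documents tagged 'error' from the last day, newest first"
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"tags": "error"}},
                {"range": {"timestamp": {"gte": "now-1d/d"}}},
            ]
        }
    },
    "sort": [{"timestamp": {"order": "desc"}}],
    "size": 50,
}
```

Describing the document format once and then asking for variations of this structure in natural language is where the time savings come from.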


I'm using it for single-file programs and building command-line utilities; it excels at building apps from scratch but struggles to follow code it didn't write.

I've had so much success that I built my own command line utility (https://github.com/0xmmo/codemancer) to use in the VSCode terminal and my side projects are now ~70% written by LLM.


When I finish writing a method, function, or larger chunk of code that works well but could be simplified or optimized, I ask it to do it, and it's surprisingly good at it.

Sometimes I ask it to write complete classes for me. My longest prompt was a full page long, and it wrote a county-city autocomplete from a database: the back end in Go, the front end in plain vanilla JS with a lightweight non-jQuery library (I asked for that).


Charts on charts on charts

I've converted from being an Excel wiz to Python, but making charts was always the bane of my existence in Python, until GPT. I personally use 3.5 more than 4 because of the speed, but I use 4 when I need something critical or I know it needs to balance multiple considerations.


Given the 2021 cutoff, I'm not surprised that Rust and Nix are too new for it to do well with. I've been using it to avoid actually learning what manifest.json looks like for Chrome extensions.

One thing that helps is telling it it's wrong or missing a case or whatever. It'll type out a fix (assuming it understands, which it frequently does) faster than I can.


It's great at log processing (generating commands for awk, sed, complex grep regexps, and shell scripts to combine it all). Anything where I'm not an expert but need something very basic done quickly (e.g., the bulk of my day job is in C++, but I frequently need little bits of Python, and ChatGPT is often the quickest way to get the answer I need).


Ha, yeah, GPT* sucks for NixOS. I think this is due to:

* NixOS development being pretty fast-paced, [with certain developments (like flakes) post-dating the original GPT cutoff still?]

* Documentation being slow to show up on the internet.

* Just generally lower popularity (than Windows or Ubuntu, etc.) leading to less data for ChatGPT to pick up on.


I've used it a lot for experimenting with new frameworks I haven't used. I recently used it to do a project in Deno with Oak and wasm-dom. It got me 90% of the way with a very accurate bullet point list of what it needed to do. It makes me think about the function of my code a lot more than the exact code representation of it.


I know how to program. I understand Lisp. What I have problems with are all the built-in functions and variables in Emacs Lisp. It has dramatically improved the readability of my Emacs configuration and also taught me some new ways to do things in Emacs (and when to stop because I can’t do something).


I've used it to write SPARQL queries to explore Wikidata a bit. I'm about at the break-even point where hallucinations mean it would be worth it to just learn the language, but I think it clearly saved me time writing some simple queries without knowing the language at all.


Today I used it to rewrite a function in llama.cpp that compiled in GCC 9 but not GCC 7 (an AVX2 intrinsic wasn’t available… idk the specifics, I barely know C/C++). The first attempt was wrong, so I had it write test code to debug. Using the test output, it fixed the code afterward.


I use it as a sounding board to think out loud. It's also helpful for certain programming tasks that I'm not familiar with but that are popular in certain circles. It's great for getting up to speed without reading tons of articles or Wikipedia.


It works well for non-programming tasks, like writing the bullshit career-development documents I'm forced to write, and crap like that. It's absolutely useless for software development, though.

I just use Amazon CodeWhisperer as a nice autocomplete but that's it.


Nah. It’s pretty useful for software dev too. Granted, you need to be able to vet the information you’re getting back to an extent, but I can’t even begin to calculate the boost in productivity I’ve had when learning a new language or library since I’ve started using ChatGPT.


I find that it's great at creating a starting point for React components using Material UI. Generally, I find it more helpful to review the code it produces than to write it myself on the first pass.


Optimizing SQL queries. Give GPT-4 a PostgreSQL EXPLAIN output together with the query, and it gives very nice results (you do need to iterate a bit). It has steered us in a better direction quite a few times now.


Here's roughly how my conversation with GPT-4 went:

Me: I want to make an svg editor, give me some suggestions. I mainly want something with mobile support.

GPT4: gives me some options

I look over the options and choose fabricjs

Me: Start by loading an svg at a predefined url

GPT4: <code>

Me: Ok now implement a save feature, send the json to this url ...

GPT4: <code>

Me: The text is loaded as an image, I want the text to be editable

GPT4: <code>

Me: I'd like to add some google fonts to the text editor

GPT4: <code>

Me: The fonts aren't loading, I think we need to load the fonts first before initializing the canvas.

GPT4: <code>

Me: Ok add a undo/redo feature

GPT4: <code>

Me: Let's add some clickable buttons instead of hotkeys, here is the html..

GPT4: <code>

I probably could have done this myself, but frankly it would have taken me a long time to figure out the fabricjs api. It probably saved me at least a week making this thing.

here's the live app: https://tinyurl.com/2dhh58cn

and the code: https://tinyurl.com/2tu4xrtn

You can tell the GPT-generated sections by the (overly) verbose comments.


TypeScript and React - I feel like it’s good enough to get the ball rolling on some annoyingly complex topic like flick-to-animate drawers; but once it’s got a scaffold I can usually take on the rest.


I had it write a regex for me the other day. It was super convenient!

I just didn’t feel like looking up the language specific regex syntax I needed and poring over the verbose examples for an hour.

Worked perfectly.


I use it for complex Terraform (HCL2) dictionary/list comprehensions... Give it a sample input, then the desired output... Works wonders!


I had a bunch of matrices dumped by a Matlab script that I needed to paste into a Python script. I asked ChatGPT to write a sed script to reformat the matrices into numpy arrays.


I've been using it to do image processing in opencv, it's saved a lot of time I would've spent figuring out the required transforms and matrix operations


I got it to generate ladder logic for a PLC, and translate from Siemens to Allen-Bradley.

The trick is to tell it you are going to import/export with XML files, and it works in those.


Snippets of code in multiple languages, and generic help like what Linux command / sequence / options to use to do this or that, etc. Serves me well.


I'm trying to build React apps with TDD. I've successfully built a todo app with just prompting. I'm adding more features to it now.


I offload dumb tasks to it, like bash one-liners or small code refactorings.

Essentially, it is an advanced autocomplete.


It is great for any type of shell scripting. It also works well for quickly fleshing out type definitions.


LogQL queries for Grafana. Also explaining regexes in plain English, and translating plain English into regexes.


It's really good at generating sample data and reformatting.


writing regexes


converting code from php to Go


I was able to get ChatGPT-4 to produce a working WebSocket server in Rust fairly quickly. I know Rust but had no experience with the networking crates or async runtimes.

Getting it to also serve HTTP, it ran into quite a few issues. Part of it was not telling me I needed to enable a feature, and part of it was that its knowledge was quite a bit out of date.

I actually filmed that whole interaction here:

https://www.youtube.com/watch?v=TFsbMGSOeCY

I also was able to get it to make a working (though extremely basic/naive) SAT solver in J. J is pretty far out of the mainstream, so I had to go through MANY rounds of correcting it. (That was the only time I used up all my ChatGPT4 prompt quota for the 3-hour period.)

Since then, I've stumbled on the technique of presenting it with a rough plan or idea and then iteratively having it ask ME questions about what I posted, and summarizing everything we've agreed so far, rather than just immediately writing code. I find that it's actually pretty good at pointing out things I hadn't considered (security and scaling questions, for example), and asking for clarification.

Most recently, I've started using it to help me get past the learning curve in languages where I'm not fluent at all (making an animation in Mathematica, and discovering how to do some simple things in Smalltalk).

In general, I try to ask it for the most minimal/general example it can give me that shows what I actually want to know. For example, building on the Rust web server thing, I asked it to give me the structure for building a RESTful API with certain endpoints (which "we" worked out "together" using the iterative design-discussion method) but to leave the implementations blank, because they would be unnecessary detail for it, and I already knew how to do that part.

Aside from that, I've used ChatGPT 3 in a non-chatting context through GitHub Copilot, and that is a whole other ballgame: it's basically a plugin for Visual Studio Code that acts like a super-smart autocomplete.

It doesn't always guess what I'm about to type correctly, and the wrong suggestions are occasionally annoying when I'm pausing to think through how to word a comment... But very often now, when I start to write a function, several whole lines that were just a vague idea in my head suddenly appear on my screen exactly as I would have written them. (And I mean exactly, including my sometimes unusual code formatting choices...)

I'm still on the waitlist for copilot chat, which presumably is just ChatGPT but insta-trained on your codebase... I'm very much looking forward to trying it, though.



