Ask HN: What's your favorite GPT powered tool?
263 points by surrTurr 11 months ago | 243 comments
There have been many tools powered by GPTs coming out over the past few months. Too many. Which ones are actually worth using?



Am I the only one who isn't using ChatGPT on a regular basis?

Though I've made a few attempts to use it, I generally already know the answers to the trivial questions. And when I move on to more complex questions and scenarios, it becomes apparent that ChatGPT lacks a deep understanding of the suggestions it provides. It turns out to be a pretty frustrating experience.

Also I've noticed that by the time I've crafted intricate prompts, I could have easily skimmed through a few pages of official docs and found a solution.

That said, considering the widespread buzz surrounding ChatGPT, it's entirely possible that I may simply be using it incorrectly.


Perhaps that is because you haven't used my top ten prompts for software devs who are worried about losing their jobs!

But also yes, it is not the greatest thing ever. It is mostly mediocre for me as well. And lots of the most exciting things, like DMing a Dungeons and Dragons campaign, are actually mostly terrible, because it recreates the same scenarios over and over and never remembers any of the information a DM should know: anything on your character sheet, when to ask you to roll the dice, or even which version of the rules it's supposed to be using, even if you ask it directly to use that particular version. But it is wild that it has read all of the rules. I was using it to run a Star Wars Saga Edition game for myself (I'm not that lonely, I promise). And at some point it occurred to me: this is all copyrighted information. Is it allowed to read this and then regurgitate it to me? How much of this is fair use? If I wrote a campaign for the system, could I regurgitate rules like this in the campaign book? Who owns all this material that is created not from imagination but from highly complex combinatorics?


I don't want to come off as self-promoting, but the DnD DM problem is exactly what we're working on solving. If you build the infrastructure around the ruleset and leverage the AI for the storytelling, it becomes much better than the vanilla ChatGPT experience, i.e. it can reference your character sheet, ask you to roll for skill checks, do combat, etc.

It's far from perfect but we're continually working on improving it and you can check it out here if you're interested: https://www.fables.gg/


Game rules are generally not copyrightable. The particular written expression of game rules may be, though. So it becomes a question of how much it took from the game books and whether it presented them verbatim or in a new way.


Please share your top 10, or for that matter top 20, ChatGPT prompts.


ChatGPT has zones of competence and your opinion of ChatGPT is likely to be a function of whether or not its competence zones overlap with what you are doing.

Early on, ChatGPT knocked a bunch of highly technical questions I sent it out of the park. It trivially reproduced insight from hours of diving through old forums and gave me further info that eventually proved out. More recently, it has completely hallucinated responses to the last 3 technical questions I gave it in its familiar "invent an API that would be convenient" style. It's the same ChatGPT, but two very different experiences as a function of subject.


> Early on, ChatGPT knocked a bunch of highly technical questions I sent it out of the park

I hear this all the time but never with a transcript. I wonder how much experts “read into” responses to make them seem more impressive. (Accidentally you understand, no malice). Or if in the moment it feels impressive but on review it’s mostly banal / platitudes / vague.

The few times I’ve used it for precise answers they were wrong in subtle but significant ways.


Ask it about cleaning a toilet, and then go deep on bacteria and fungal growth etc. For areas in which you have no expertise, but have a simple understanding, it will grow that knowledge tree.

My apologies if you are an expert toilet cleaner, the point is it's more useful than how-to-wiki or YouTube for getting you up and running or refreshing info you may have forgotten.

Avoid asking it for VERY obscure things you know nothing about, because it probably doesn't either, but won't say, and the hallucinations start.

The falsified stuff can be pretty awful, and it has a tendency to double down.


This aligns perfectly with my experience. I feel like I haven't found the correct way to utilize these tools yet as, just like you, whenever I attempt to use them for some intellectual work it proves to be a huge waste of time and the old manual hunt for information and DIY approach is much more effective.

I know a lot of people leverage gpt for basic writing tasks. I'm confident in my writing ability and enjoy writing so I don't use it for that either.

I might try using image models at some point to generate some pixel art, but on the whole I've found these tools pretty useless and am left wondering what I'm missing. To me it seems like they only work for a very specific case: "I have no background knowledge in domain X and am ok with a quick solution that is adequate but likely not optimal, and I don't need to worry about correctness." Unfortunately, that's completely out of the realm of the sort of work I like to do.


One thing I like to do with it is telling it to act as a machine that, for every text I paste, outputs specific academic subjects and their respective courses that dive deep into that subject, so I know where to learn more.

Another option: if you want to learn about X, you can ask ChatGPT to give you a list of authors or subjects that discuss that topic more deeply, which gives you a head start, as you can then ask follow-up questions such as where their central argument fits in the bigger scheme, which were more influential, etc.


It’s very important to qualify this with which model you are using. 3.5 and 4 are very different in the quality of answers they give, the version numbering is deceptive.

As a side note, although I can't confirm this, it seems like the general quality of answers for both 3.5 and 4 has decreased over time. I suspect they are doing further RLHF, which has been shown to make the model stupider, for lack of a better word.


3.5 and 4 are basically completely different products. 3.5 is a great free tool that you can use to tidy up your emails before sending them, and that can summarize and explain simple concepts to you so you don't have to wade through Google search results or scroll through long Wikipedia articles.

GPT-4 can actually do stuff: it hallucinates an order of magnitude less often, generates working code, and can explain complex subjects with nuance. It's still a bit of an idiot: its output is intern level at best, but having a lightning fast intern at my disposal 24/7 has already revolutionized my workflow.


Using 3.5


There are probably things you could do to get more out of 3.5, but 4 is waaaaaay, way better for every complex thing I've tried. 3.5 was helpful on some margins when I remembered it, but 4 is a huge part of my workflows now.


3.5 vs 4 is like day and night. Try forming your impressions on v4.


I must have a play with GPT4 at some point. For me, 3.5 generates working code more often than not and rarely hallucinates.


Not the OP but good to know. How does 4 compare to Bard?


3.5 is arguably better than Bard on some coding use cases, with higher accuracy. 4 is just much, much better than both of them.


There is a trick to get better performance out of 3.5: give it examples, specifically in Q&A format. You basically have to show it how you want it to act first, and then ask your question or give it your task. It can be a pain to craft a good prompt, but the really important and amazing thing is that the knowledge it gains from the examples transfers to other similar tasks. So you don't have to answer your own question; just show it how you prefer questions to be answered, then ask the question you actually want answered (or tell it to perform the task you want), and you can reuse the prompt for other questions.
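For anyone who wants to try it, here's a minimal sketch of that few-shot Q&A pattern using the (pre-v1) openai Python package; the model name and example content are just placeholders:

    # A minimal sketch of the few-shot Q&A pattern described above,
    # using the (pre-v1) openai package; assumes OPENAI_API_KEY is set.
    import openai

    messages = [
        # 1. Show the model an example question answered in the style you want.
        {"role": "user", "content": "Q: How do I list files in a directory in Python?"},
        {"role": "assistant", "content": "A: Use os.listdir():\n\n  import os\n  print(os.listdir('.'))"},
        # 2. Then ask the question you actually want answered, in the same format.
        {"role": "user", "content": "Q: How do I read a file line by line in Python?"},
    ]

    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(response["choices"][0]["message"]["content"])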


One mistake I've seen people who are unhappy with GPT make is they ask it questions they already know the answer to, topics they are an expert in, and then tear down the response.

Try asking it things you would google for. I'm finding it to be a pretty good "I'd like an answer to this question, I don't want to wade through a bunch of google results trying to find the one that answers this question" engine. Sometimes it fails, but often it's really good.

This is probably why google has an "all hands on deck about AI" going on right now, I've been using google a lot less lately between bing and chatgpt. However, last week I noticed that Google Bard provides an answer PLUS gives reference links, which ChatGPT can't do.


If the answer is wrong for topics you do know, why trust the answers for things you don’t? What’s going to be the term for Gell-Mann amnesia[1] but applied to LLMs instead of the media?

[1] https://web.archive.org/web/20190808123852/http://larvatus.c...


The questions I ask on topics I am familiar with are usually very demanding. On these it is clear that there are limits to its comprehension and reasoning abilities, but nonetheless, I find it very impressive.

Questions I ask on topics I am not familiar with are much further from the limits of its knowledge. I find it to be an amazing tool for quickly getting a structured overview of a new subject, including pros and cons of different alternatives and risks I should be aware of.


You should have a healthy dose of skepticism about anything you read online.

The examples I'm thinking of where people completely dismissed ChatGPT were asking things like "tell me about <MY NAME>", "Explain this thing I wrote my thesis on".

In other words, throwing a toy problem at it, getting a bad answer, then making up their mind that it's not useful.

I'm not advocating blind trust in it, I'm just saying don't try a couple things and decide it's garbage. You're doing yourself a disservice.

I love asking it for unit tests and docstrings for my code. Is it perfect? No. But does it give me a starting point for something that I otherwise might not do? Absolutely.

I've asked it a lot of things where I'm familiar with the topic and can immediately eyeball its answer, either because I'm having a brainfart or because it's a little fiddly to work out. It can be good at those.

I've asked it a lot of questions with things that I'm not at all familiar with, probably the best example is Powershell, and it's provided good answers, or answers that were at least able to lead me in the right direction so I can iterate towards an answer. But then I also tried to use it to write an AWK script and it failed miserably, but got 80% of the way there. Which, honestly, is about the best I've ever been able to do with any complex AWK script. :-)


I think GP means people are being too perfectionist with GPT-4. Between things you're an expert on, and things you have no first clue about, there's also a vast space of things you understand to some degree - enough to let you apply intuition, common sense. In this space, you'll be able to spot rather quickly if GPT-4 is making things up, while still benefiting greatly from when it's being right.


Indeed, plus what are you going to do otherwise? Google it. And how will you vet the pages Google returns? Intuition and common sense, and perhaps an appeal to authority.


Exactly this.

And LLMs are a new thing too, it takes some getting used to. I admit I almost got burned a couple times when, for a few moments, I bought the bullshit GPT tried to sell me. Much like with Googling experience of old (sadly made near-useless over the last decade, with Google changing the fundamental ideas behind what search queries are) - as you keep using LLMs, you develop an intuition about prompting them and interpreting results. You get a feel for what is probably bullshit, what needs to be double-checked, and what can be safely taken at face value.


Something to keep in mind too is that ChatGPT's model is from September of 2021, and so, as it will state quite clearly, it "has no knowledge of the world after that date."

I asked Bing's Search AI the same question and it claims that it updates itself every day to stay current on the latest news and happenings.

2 years is a LONG time in our current society, so keep that in mind when it provides answers.


> gives reference links, which ChatGPT can't do

Yes it can, you just have to specify that it do so.

I find that it tends to limit the hallucination, or highlights when it's hallucinating.


I haven't had much luck with that. The most recent example I'm thinking of, with ChatGPT 4, is I asked it a question and its response included something like "can be set up by following the documentation". I asked "What is the URL of the documentation you refer to above" and it said "I'm an LLM, I can't do that."


I've been using it to learn programming concepts I'm unfamiliar with and I perceive a huge difference in quality of chat-gpt answers between:

- Salesforce API

- AWS

- React Native

The react native docs are good enough that I really empathize with what you're saying - wouldn't I just be faster reading the docs? However, for salesforce I feel exactly the opposite - the docs are all over the place and message boards tend to be of low quality for the subject.

For what it's worth, AWS felt somewhere in the middle depending on the service.


>- Salesforce API

I've used it to write unit test methods for Salesforce apex classes & triggers. For the most part, the unit test code it spits out is actually decent. Occasionally I'll have to clean it up a bit or make minor corrections. But it has greatly helped free up some time doing mundane stuff.


I used it at the very beginning, but I've been mostly disappointed aside from a few examples.

Things it proved useful for:

- learning new programming languages and solving HackerRank or Codewars katas in languages I didn't know. It sped up my learning and lowered the learning curve.

- writing recommendation letters based on my inputs ("write a recommendation letter for X, knowing those are his strengths and what he's good at).

Things it proved terrible for:

- personal assistant. I was hoping to use ChatGPT as a tool that would help me reason. I did try to use it as a companion while learning the new Epic Games programming language "Verse". ChatGPT proved absolutely terrible for this task. I would feed it paragraphs about the Verse calculus from the PDF and then question it about what I had fed it, but a few exchanges later it would start hallucinating.

- text editing. I tried to feed it paragraphs and then tell it to modify this or that. But it's capped in the number of tokens and it would make mistakes. Say I told it "modify this line": it would reprint it, but if I then asked "hey, so what's paragraph 2?", it would again give me stale versions.

- calendar assistant. I would tell it what my priorities and appointments were, but it was very unhelpful and required far too much interaction.

In the end I just stopped using it; besides a few niche use cases (which haven't come up recently) it just does not provide value where I want to find this value: as an assistant.

I'm still moderately optimistic about such use cases being a reality in the future, because the idea of having a companion I can converse with about my daily tasks and work tasks (sort of a jarvis) is really where I expect to find value.

Where I don't expect it to provide value is overhyped things like Google or Bing search; there, I want to find websites.


Try with 4.


I've played with it for writing text and it was fine for some boilerplate definitional stuff but I could have copy/pasted that pretty trivially too. Maybe could have saved me a few minutes with some starting point listicle stuff too--but would have taken a bunch of rework and additions to be publishable.

Anything that's not straightforward? Even if not flat-out wrong, it lacks any real nuance.

I also played with it for some trivial coding and, again, could maybe have saved me a few minutes but nothing earth shattering.

So, yeah, maybe I'm doing it wrong but it's not remotely a go-to tool.


ChatGPT4 has replaced ~99% of my Google/StackOverflow searches...


Yea, I'm similar. I use GPT4 as a sort of "thought/idea search engine". I use Kagi (my search engine) when I need to find something I already know, or when I know there is a concrete answer that I might (definitely) not trust GPT on. However, I use GPT when I am exploring new concepts, learning terminology, relationships, etc.

GPT is super prone to error on any niche thing, be it subtle nuance or concrete details, but I don't actually find it to be any more or less reliable than the state of the web. I.e., most content I consume from the web is full of errors, incorrect facts, etc. I don't find my suspicion of this to be any different from my suspicion of GPT. However, GPT has a great way of having the relationships between those error-ridden web results already established. I can ask it about this web of things to get a better understanding when I drill into concrete facts via standard search.

GPT is far from a gimmick in my usage, but I also don't think it's revolutionary yet. I use it because I'm exploring and finding it useful, but also because each new leap OpenAI makes is pretty damn impressive. This week they're rolling out Plugins, I believe. Every step they take is fun to follow, fun to use, fun to experience. That alone is worth the $20/m for me.


Here with you 3. I use the warp terminal. They integrated chatgpt into it. I almost never leave my terminal now if I need to search something quickly.

If I forget some api or technique in some language, I ask it. Depending on its results, I may or may not start browsing for more details or to verify.


I agree. Especially because it seems like, almost simultaneously, Google's result quality has just tanked. It's difficult to find good answers to most things that aren't either behind a paywall or written by the company that made them etc. It helped me debug my tractor last week and helped advise me on the correct part numbers and stuff. Things I was unable to find on the open internet without confusion because everyone just had product pages with zero information.


One key thing, especially for writing, is to give it examples or a starting paragraph that it should continue from. If you just tell it to "write a document about X", it defaults to a very poor style or, worse, a list format. So instead, copy-paste in examples of your own writing and instruct it to write in that style, or provide instructions and then write the intro paragraph yourself.


Using it effectively is hard. It requires developing pretty strong intuition as to what it's useful for and what it isn't, which means you have to work with it quite a bit before it "clicks" and you start finding it useful.

This is pretty unintuitive, especially given the hype around it.

Once you DO learn how to use it the productivity boost it can provide for all sorts of different things is substantial. I use it a dozen or so times a day at this point.

I've written a bit about how I use it here: https://simonwillison.net/series/using-chatgpt/


This is almost the same thing I say to people.

To ask GPT about something and be able to determine whether its answer is false or not, you already need to be competent and knowledgeable in that topic. If you're already competent and knowledgeable in a topic, you don't need to ask GPT about it because you already know it.

ChatGPT is not an expert, it's not even written to try to be one, it's fancy phone keyboard prediction with a random element added.

It's great for creative stuff like "write a cute poem about a puppy", but so far literally every technical question I've asked it has been answered incorrectly.


GPT-4 is decent [1] at generating SQL queries from a short text description.

[1] assuming you write raw SQL rarely if ever by hand


I felt that way until I stumbled upon a usage I didn't anticipate: Helping my teenage son with his math and physics homework.

Generally, I know the concepts he's working on but I've run into a couple problems. First, they don't use textbooks at his school. It's just poorly made slideshows from the teacher that don't clearly explain any topics. So, to get to parity with his understanding, I'd have to hear the teacher explain it.

Second, his work will generally reference names of laws, theorems, etc. that I'm not familiar with. Usually, I'd search for a document, Khan academy video, or some other YouTube video but this has been time-consuming.

I started asking ChatGPT: "Take on the role of a high school teacher in an advanced physics course. Give me a basic explanation of [insert law name here]. Please provide at least two examples and a reference URL."

This has been incredibly useful.


I use it extensively. For example, I was getting an error saying that variable X did not exist on an object. I could see the object in the debugger and could see the variable.

I put the section of code and the error into ChatGPT, and it said that the function I was using returned a collection of the object while I was using it as if it were the actual object. Obviously I had not noticed the punctuation that the debugger used to indicate a collection.

I recently was looking for the melting points and hardness of various materials. Obvious ones were easily available, but what is the melting point of brick or granite? I put in a big list of materials and ChatGPT got them all.

Finally, today I had a shopping list and I asked ChatGPT to organize it by aisle for the store I was going to. It didn't do a great job, but it was mostly correct. I could easily fix the mistakes myself.

I think for many things ChatGPT gets 90% of the way there, and the 10% you have to fix is no big deal.

I really like it for generating SQL queries and regular expressions. Two things that take me a lot of time and that I do infrequently enough that I can never remember how they work.

A friend of mine said he feels like ChatGPT will enable a resurgence of the generalist programmer. I'm a solid developer but I don't do enough in any one area to be completely immersed. ChatGPT is amazingly productive for new areas or areas I don't touch that often. Recently I've used it for Linux networking, OAuth, some reasonably complex SQL queries, Ruby threading, and working with the Ruby 2D API.


Could you provide an example prompt illustrating the general types of questions that you are asking, with which it seems to struggle?


It's been a few weeks ago, but I recall asking ChatGPT about models in Django, which on the surface it often gets perfect.

However, when I asked that the model in question needs to be unmanaged and backed by a SQL view (which I pasted), which had previously been applied from a migration script, it became pretty obvious that ChatGPT is just putting together sentences that are highly likely to be true, but not really understanding the architecture of Django.

So that's one example - and it's not even a particularly complex scenario.


I've been using version 4 for Django quite a bit and it has saved me quite a bit of coding. I'll just throw in my model and explain in detail what views and serializers to write and the code is near perfect. I didn't have that same experience with GPT 3.5.

And wow, is it convenient for writing tests. I just copy the entire views.py in and list all of the tests I want. The key is to be explicit with what you want.


Try asking phind.com. If it brings up outdated info, just give it a link to catch up on the latest stuff. It helped me grok Next.js server actions and convert some code to use server actions over API calls.


Can you provide the exact prompt?


I would have to redo it.. I cleaned up all my prompts.


I’ve never even tried it, so no. Docs are usually sufficient, and reworking prompts just doesn’t look that enjoyable to me.


I asked ChatGPT about a very subtle Python bug: default parameter values are evaluated once, so a mutable default will retain its mutated value across invocations.

I'd forgotten about this weirdness, but ChatGPT explained it.
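For anyone who hasn't hit it, this is the classic gotcha (a small self-contained example, not the code I actually asked about):

    # The default list is created once, at function definition time,
    # so the same object is reused (and mutated) across calls.
    def append_item(item, items=[]):
        items.append(item)
        return items

    print(append_item(1))  # [1]
    print(append_item(2))  # [1, 2]  <- the value from the first call is still there

    # The usual fix is a None sentinel:
    def append_item_fixed(item, items=None):
        if items is None:
            items = []
        items.append(item)
        return items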

I also managed to get ChatGPT to write two pieces of fairly complex C++ boilerplate - one was a std::vector that used mmap() and mremap() to grow linearly rather than by a fixed factor (also avoiding a memory copy on resize).

Then I made it write a vector whose iterator was an integer index rather than a pointer.

I made it write all the unit tests and benchmarks for these and it did everything correctly except not knowing that munmap() needs the size parameter rounded to the nearest page.

Obviously I didn't get everything correct from a single prompt. It took an iterative conversation and successive refinement.


Mostly the same experience here. However, there are a few things it worked kinda well for:

- Suggesting and writing YouTube shorts scripts about facts. However these turned out very sensationalised and less factual.

- I have a website with categories based on real world things. In the past I paid hundreds of dollars for mediocre short descriptions. With ChatGPT I paid $0.04 for good and long descriptions.

- I experimented with automated blog articles. It doesn't work for me because of their 'morals', but if you write boring articles about non-controversial things without any opinions, it works better than any (cheap) paid writer I've had so far.

So far I haven't had a single coding issue I thought GPT could help me with. I don't understand the buzz either.


It's not necessarily about getting help. For me, at least, it's about speed. It can look things up faster than I can. I can copy/paste/modify the results pretty quickly and every now and then it surprises me with a solution I hadn't thought of.


I gave it a try a few hours ago and it hallucinated a method with invalid syntax. Likely better with a more optimized prompt but definitely not what I expected.

However, it was close, close enough that I could find the right method through GitHub that I hadn't found before.

Not bad, not amazing, but I see the potential


I only use it for fun, just to see how it would respond to a particular question.

At the moment, I can't find a serious use case for me. I find it really hard to guide the AI in the direction I want, or it requires extra work I'm not ready to put in [1].

[1] https://martinfowler.com/articles/2023-chatgpt-xu-hao.html


I have found it to be good in cases where I need a rough outline of something.

For programming, I often ask it for some modules in language X that can do Y.

Sometimes it surprises me and lists something I have not heard of.

Other times it makes stuff up.

I think to use it well, you have to be somewhat of a subject matter expert in the topic you are asking about.


I've tried! I want it to fix issues for me in the code. I would love for it to answer questions. But I use it very seldom because it kind of sucks.


It's not fixing things for me (yet), but I'm using it for brainstorming, for transforming long-form content into tweets, and for prototype code generation.


No, I don't use it regularly either. My main use case is asking the Bing bot how to use a Linux command that I don't remember off the top of my head.


Phind is a GPT powered search engine optimized for developers / technical documentation. It searches the web and tries to aggregate results from multiple web sites. Although there are instances where it references outdated versions of libraries, on balance, it significantly reduced my time spent on technical research.

https://www.phind.com/


For me, the beauty of Phind is that it will wade through the ad ridden web 3.0 hellscape to get an answer for me in plain text. I am not sure exactly how much of that process is to do with the language model per se. Probably basic vector store distance measures and traditional indexing paired with text extraction would do the job "just as well". I don't really need the natural language spin on top (I think).


+10 for phind. It's the only GPT tool I find that works well - for me currently.


I wish Phind would "autoGPT" a bit more, though. It often feels like Phind just grabs the top X results and makes a reply from those. If I was searching and those results came up, I'd often skip them.

Strangely, I use ChatGPT4 every day but am using Phind less and less. When I have something to search the web for, Kagi is faster for me. When I want to search GPT4 (thoughts/whatever), ChatGPT is faster for me. Phind is cool, but kinda feels like the worse version of each... if that makes sense.


It doesn't just pick the top result though. I was given an answer from SO that wasn't the top voted and not the "approved" answer, either. But it was, imo, the best answer it could've given me. I was pleasantly surprised.


Yeah, Phind has replaced Google for half of my searches. My only small complaint is reliability — occasionally I get “inference service not available” (paraphrased) and have to regenerate once or more.


Yeah Phind is the only AI service I use on a daily basis. Plus, free access to GPT-4 in answers is a bonus. They really nailed the experience


I love phind, but also like codeium and genie vscode plugins, really the tools I use the most. Sometimes I go back to GPT4 though.


I've used it many times for things that aren't technical. It's fantastic. I love that it provides references.


Link to the docs and tell it it's wrong about the version, or ask for the features of something like nextjs 3.4, and it'll refresh its context and give much better results. I don't know what their secret sauce is, but it blows Bing out of the water.


I was instantly a big believer in Phind.

It has replaced 95% of my previous DuckDuckGo searches for development info.

I even used it with another developer to solve a mission critical bug based on some very vague symptoms. It's saved so much time, I'll never go back.


Phind is one of the stickiest apps I’ve ever used, reminds me of when I first used Google Images… now I use it constantly

It also seems much more useful than ChatGPT or StackOverflow by themselves


I built a tool that gives GPT a Docker container to run commands in to accomplish a particular task. I find myself using it for things like simple file conversions (it mounts the current working directory into the container). It can install software in the container so it’s like a chatbot with the entire apt universe at its disposal.

https://github.com/drifting-in-space/botsh


wow that's a really useful little sandbox. Definitely will play with this later


So, everyone's favorite is the one they built themselves?


I think that's the big success of GPT. Imagine if you asked "what's your favorite type of a Google query", nearly everyone would have a different answer. In particular a substantial percentage of Google queries are only ever asked once.


Yes, because GPT-powered services are usually just lousy abstractions over the OpenAI API, and implementing them for your own custom niche will work much better.

For example, I tried Otter's meeting summarizer, and it was incomprehensible, but when I implemented it myself with GPT-4 I got stellar results.


Maybe that's the power of ChatGPT: it is so adaptable to very individual use cases that people can apply it to precisely their context.


Based on how easy it can be to write ChatGPT Plugins when GPT can write them for you, I could see this becoming more and more true.


Yeah that was a red flag takeaway for me from this thread. How much of the AI hype is someone trying to sell me on something they made or will profit from?


Take a look at your sibling responses. In contrast you’ll have a working definition of cynicism.


Seriously.. This just turned into a "Show HN" comment section. Half of the ones I've seen are like "Still in early development"


Practically. Or at least, will be. I don't have all my chips on it, but I've been working on jupyter and langchain tooling.

This feels remarkably like when we were arguing over scrypt vs bcrypt and used to benchmark our GPUs by seeing how many dozens of bit/litecoins we could generate in a night.

In four or five years, we won't recognize the world we're living in today.


Yup! At least for me, I wanted ChatGPT to be native and keyboard-first :)


Got an example of it? I too want a nicer UX around my ChatGPT Plus sub (ie not API based).

Frankly a cli version that could pipe into my code editor (Helix) would be amazing.


I saw someone's workflow where they taught it to make its own shorthand language. Interestingly everyone ended up with their own shorthand, and yet the GPT-4 was able to understand all the individualized stuff just fine.


(apparently) Unpopular opinion: It's pretty cool that there are so many "look what I made" responses here. This feels like what happens in a renaissance; there aren't the tools so people are making their own. It might also be demonstrating that the tools are helping to speed innovation, allowing people to "punch above their weight".


My interpretation is that there are no great tools, otherwise there would be go-to answers. It's been out for so many months that there should at least be a handful of them if they actually were great.


I use Phind.com + co-pilot + GPT-4 to write code daily. I just tried using Bard for summarizing my blog post as it has a larger context window than GPT-4 and it worked really well, so I'm adding that to my workflow.

Separately, I like combining serper.dev and Scrapingbee with GPT-4 / langchain to summarize scientific articles and news for me, on top of my (ugly, non-sharable) AI scripts, but they're basic.


I don't get Phind. Isn't it the same as just a ChatGPT extension on top of Google?


I am using ChatGPT to convert some very brittle tidyverse (R) code into base R, and it’s outstanding. I could finally learn (and then try to remember) all of R’s conventions but it’s a lot more pleasurable to have the chatbot actually do the converting from ideas into code.

The context is that I am someone who codes in intense bursts every few months, so a lot of the details never really transition from short-term to long-term memory. ChatGPT is perfect for this.


Just ChatGPT itself. It’s my therapist, stackoverflow, Wikipedia, and many other things. Best $20/month I’ve ever spent.

I’m skeptical of most prompt-based tools. I’d rather just get to the source and tweak ChatGPT to talk about exactly what I want.


I still use the "vanilla" stuff.

GPT-4 has been superior to Phind in my experience, especially on technical questions. And the web browsing beta closes all the holes remaining. Downside is speed, but it's a fire and forget thing.

I tried AutoGPT and the like, but found the self-iteration not very helpful when tackling real world problems. GPT-4 gives enough step-by-step instructions to do anything I need it to do, without it being wired into my file system.

Prompt engineering seems to be a thing of the past with GPT-4 too, though you still need to give context, which "prompt engineering tools" don't help with.

Plugins are also pretty powerful. It seems to hallucinate a lot with Zapier, but it's still the best tool by far.


Agree on AutoGPT. It’s more useful for me to share what happened and ask follow up questions. Also sometimes it simply is bad at something, and putting it in an AutoGPT loop just makes things more confusing :). At that point I just use google.



I see chat very similarly to how I see spreadsheets - a computing paradigm for non programmers.

As such they provide a very specific abstraction for broad range of tasks. The moat for most prompt based tools is very small. These tools are similar to the moat of a spreadsheet template.

At the end of the day whether it’s a spreadsheet to track my mortgage or a tailored prompt, I typically want to fine tune it for just me.


Raycast AI (now Raycast Pro). It makes using GPT more accessible and user-friendly by bringing it right to where you need it. Also their take on AI Commands is quite useful. You can create commands with a custom prompt and the prompt will additionally contain the currently selected text.

Some example AI commands that are built in (you can of course create your own commands):

- Improve Writing

- Change Tone to Friendly / Confident / Professional / Casual

- Fix Spelling and Grammar

- Find Bugs in Code

- Explain Code Step by Step

- Explain This in Simple Terms

https://www.raycast.com/pro


Downside is that you can't select GPT4 for now - probably because users would associate the slowness with Raycast then.


Havana summarizes my sales calls and drafts a follow-up email automatically after each call.

https://tryhavana.com

When you do more than 5-10 sales calls a week, writing up notes and emails can take hours of your time.

It's tedious work but also must be done (otherwise you might forget what's going on in a deal when it's time to do another call down the line!).

Also, the quality of the summaries and emails must be good (clear, readable) but not necessarily great (we're not looking to win a Pulitzer here).

It's the perfect kind of task for GPT.


Can vouch for Havana. We use fathom for video recording but the AI summaries are unusable. Havana has filled that gap


Love this as a founder! Can see value in leveraging sales specific data to fine tune results.


Havana's summarization is actually pretty good - we're using it at Pylon as well!


Been using it for some time now since its release, and the results are better than anything else I've seen! (And it keeps getting better :D)

love it!


Looks pretty useful - will give it a try


This is awesome. Trying it out.


wow this looks really cool!


My favorite thing to use ChatGPT for is a personal tutor. I've been able, for my own intellectual interest, to educate myself about some topics in math and CS that have so far eluded me. The descriptions it generates are better than anything I've found elsewhere and if it's not clear I can say things like "explain the above for an audience with a background in computer programming" and it will re-attempt the explanation.

Rather ironically I'm using ChatGPT to teach me about AI, explaining concepts like tensors and attention layers. It's a great way to make sure I'm in good with Roko's Basilisk since my AI-generated immortal soul will be able to cite my ChatGPT log as proof that I helped bring the Basilisk into existence.


There are so many useful tools that I keep an Awesome list up to date with OpenAI API tools, as well as open LLM tools. Especially the up-to-date list of open LLM models might be of interest to some, in case someone wants to be independent of OpenAI:

https://github.com/underlines/awesome-marketing-datascience/...


I don't like those lists... too many things. No idea what is actually usable / useful in there.


I like the list, but the list name itself could be better!


For GPT/Copilot-style help for pandas, in a notebook REPL flow (without needing to install plugins), I built sketch. I genuinely use it every time I'm working with pandas dataframes for a quick one-off analysis. It just makes the iteration loop so much faster. (Specifically the `.sketch.howto`; anecdotally I actually don't use `.sketch.ask` anymore.)

https://github.com/approximatelabs/sketch
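For reference, a minimal example of the kind of call I mean (assuming the package installs as `pip install sketch`; the dataframe and question are made up):

    import pandas as pd
    import sketch  # registers the .sketch accessor on DataFrames

    df = pd.DataFrame({"region": ["NA", "EU", "NA"], "revenue": [120, 80, 95]})

    # Asks the model for pandas code that accomplishes the task on this dataframe.
    df.sketch.howto("total revenue per region, sorted descending")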


We are building a programming language (https://lmql.ai) that allows you to execute programs with control flow and constrained behavior on top of LLMs. Essentially imperative LLM text generation.


How does this compare to Guidance?


Not sure what they're using under the hood but so far kagi has the best summarizer I've used:

https://kagi.com/summarizer/index.html

Also +1 for ChatPDF - it's great!


I've been using a combination of https://labs.kagi.com/fastgpt and https://www.phind.com/ on a daily basis basically replacing all search engines.

Also a plug for a weekly AI-related digest: https://perprompt.com/


https://www.perplexity.ai/ Like ChatGPT but what sets it apart from other AI chatbots is its ability to display the source of the information it provides.


It can calculate at least https://i.imgur.com/MvXJa58.jpg


Even Bard shows the sources, pretty impressive.


My favorite GPT-powered tool is Todd, this guy at my work


Straight into my top 10 favourite HN comments of all time


I am more than a bit biased, but Heuristi.ca is my favorite tool for knowledge exploration using a mind-map like layout that uses the ChatGPT API.

http://heuristi.ca/


I built https://chatuml.com, a copilot for diagrams. The main use case is when you need to start a diagram but don't know where to start: you can just type out what you have in mind and let the AI do the work:

> Alice give Bob 5 dollars. Bob give Fred 10 dollar. Fred buy a house. Alice rent Fred's house. Alice pay Fred 500 dollar....


This is kind of delightful. I frequently want to build diagrams like this but I thought the only way was to draw them by hand which is very time consuming so I don't like to do it. This makes it a lot easier and I can learn some UML in the process. Thanks for building it.


So far I created a small Tampermonkey script [0] to generate me [realistic] off-/on-water rowing workouts based on a prompt. I use it almost daily.

[0]: https://gist.github.com/alfanick/3ecac79f9590bae6819e410c338...


I'm still trying to find a good work flow.

But I like to ask Bing questions if I have a simple thought of 'what does this mean' or 'how does this work', to get a high-level overview of something where it's not vital that I know everything; it's just something I heard and realised I had no idea what it was or how it worked.

And I've started using heypi.com as a personal coach, just talking through anything I am feeling or struggling with, and I'm really liking that at the moment. I've felt for the longest time that I could do with someone to bounce ideas around or talk to about life, and I've struggled to find someone that understood my nonsense and overthinking. AI seems to do a good job with it, and I don't have to worry about being insecure about what I am talking about as it's just an AI.


> I don't have to worry about being insecure about what I am talking about as it's just an AI

I’d be uncomfortable if there were a digital transcript of my conversations with my therapist stored in some company’s servers, but I get that talking to a machine would be easier than a person for some people/topics.


Sindre Sorhus released this list:

https://github.com/sindresorhus/awesome-chatgpt


I've been experimenting with using ChatGPT for coding and building up some useful tooling.

The result is aider, which is a command-line tool that allows you to code with GPT-4 in the terminal. Ask GPT for features, improvements, or bug fixes and aider will directly apply the suggested changes to your source files. Each change is automatically committed to git with a descriptive commit message.

https://paul-gauthier.github.io/aider/

It helps to look at some chat transcripts, to get a sense of what it's like to actually code with GPT:

https://paul-gauthier.github.io/aider/examples/


This is awesome. I started messing around with it yesterday. I'm building a similar tool that interacts via Github issues and PRs, but being able to interact directly on the CLI is super cool. Some ideas (I may experiment with these on my own fork, and make a PR if I have success):

1. The main prompt is pretty long (https://github.com/paul-gauthier/aider/blob/main/aider/promp...), but I suspect that it could be abbreviated. I think the same criteria/restrictions could be enforced with fewer characters, enabling (slightly) larger files to be supported.

E.g. "I want you to act as an expert software engineer and pair programmer." (69 chars) -> "Act as a software dev + pair programmer" (39 chars) - I suspect the LLM would behave similarly with these two phrases

2. It would be cool to be able to open "file sections" rather than entire files (still trying to figure out how to do this for my own tool). But the idea I had would be to support opening files like `path/to/main.go[123:200]` or `/path/to/main.go[func:FuncName]`

^the way I've gotten around this limitation with your tool is by copying functions/structures relevant to my request into a temporary file, then opening aider on the temporary file and making my request. Then I copy/paste the code generated to the actual file I want it in. It did a really good job, I'm just trying to figure out ways to reduce the number of manual steps for that

Anyway, great job, and I'm really excited to keep using this tool.

By the way, could you by any chance update the README with instructions for running directly from source rather than via pip install?


Sorry, just noticed this reply.

I have explored the "line ranges" concept you describe a fair amount. I took another run at it again last week. I still haven't unlocked a formulation that GPT-4 can reliably work with.

It's really hard to get GPT-4 to stop trying to edit parts of the files that we haven't shown it yet. It frequently hallucinates what's in the missing sections and then happily starts editing it.


Nice solution with the search and replace blocks to edit code. I spent quite a while trying to get regular patch files to work consistently.

Then I switched to having it use sed which was better but still not totally consistent.

Can you add a license?


I just added Apache 2.0. Thanks for the reminder, I've been meaning to add a license.

I tried `sed` as well, every variant of `diff` output, various json structured formats, etc. The edit block syntax I chose is modeled after the conflict resolution markup you get from a failed `git merge`. I find it's very helpful to ask GPT to output in a format that is already in popular use. It has probably seen plenty of examples in its training data.

Asking GPT to output edits really only works ~reliably in GPT-4. Even then, my parser has to be permissive and go with the flow. GPT-3.5 was a lost cause for outputting any editing syntax -- it only worked reliably by outputting the entire modified source file. Which was slow and wasted the small context window.

Both 3.5 and 4 seem to be able to think more clearly about the actual coding task when you allow them to use an output format they're comfortable with. If you require them to output a bespoke format, or try too hard to ask for "minimal" edits... the quality of the actual code changes suffers. The models are used to seeing uninterrupted blocks of code in plain text, and so they seem to do best if you choose an output format that embraces that.


The Playground tool for ChatGPT 4 in the API documentation site.

It's like ChatGPT but gives you the option to change / edit and rerun prompts more effectively.


I have been using LLMs to automatically repair security vulnerabilities. I have a set of tools which build prompts describing the environment and the output of vulnerability scans. The tool then requests a shell script to disable/fix/update the vulnerability. The script is submitted as a PR which has actions that run integration tests. Human intervention is sometimes needed, but the focus is on better engineering of prompts (and by proxy tooling).

Describing the environment relies heavily on a CMDB, so this is not a one-size-fits-all approach and this is functioning entirely in my personal lab of ~100 servers. That said, ChatGPT has given me the best results compared to locally run LLMs
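To give a rough idea of the shape of this, here's a simplified sketch of the prompt-building step (the function names and CMDB fields here are hypothetical stand-ins, not the actual tooling):

    # Hypothetical sketch: combine a scan finding with CMDB context into a prompt
    # that asks for a remediation shell script (pre-v1 openai Python package).
    import openai

    def build_repair_prompt(host, cmdb_record, finding):
        return (
            f"Host: {host}\n"
            f"Environment (from CMDB): {cmdb_record}\n"
            f"Vulnerability scan output:\n{finding}\n\n"
            "Write a shell script that disables, fixes, or updates the affected "
            "component. Output only the script."
        )

    def request_fix_script(host, cmdb_record, finding):
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": build_repair_prompt(host, cmdb_record, finding)}],
        )
        return resp["choices"][0]["message"]["content"]

    # The returned script then goes up as a PR, where integration tests run against it.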


Just plain old bing.

Getting answers has replaced using search in about 80% of cases.

- how do i run a function when a value changes in svelte?

- how do i get the current tab id from inside a content script?

- what is the origin of the term 'use your illusion'?

- what is the average salary of a developer advocate in new york city?


MirrorThink is able to search and read scientific papers to answer serious scientific questions and find solutions to deep engineering problems.

It is also GPT-4 for free (for now).

https://mirrorthink.ai/


FYI, it self-identified as GPT-3, and did not seem to be "aware" that it was specialized in science, but rather in "a wide range of topics".

Edit: and I strongly suggest you create a "Delete your account" page or at least a way to contact the person responsible to do it :).


There's not even a logout button yet :) Noted, you are right.

Yes interestingly GPT-4 identifies as GPT-3, and you need to be quite explicit about asking it to search for the answer in papers (see collapsed prompt).

But it is excellent at using the content of papers to ground itself, it is really hard to make it hallucinate or provide evidence for scientifically incorrect claims. And it is so much faster than searching for whole papers and reading through them.

We are focusing on the core value of giving it access to scientific knowledge right now, but we are working hard to mature everything surrounding it too.


Too bad the sign up process is broken. I get to the last step to verify and the button presses and nothing happens.


You should have gotten an email with the verification code. There's a spike in sign-ups right now so it fails occasionally, try a few times, it'll work.


I tried with a few papers. Didn't work. Always making up summaries.


Not sure if it's GPT-powered, but I now get all my news from http://newsminimalist.com

It's so calm and to the point, I'm never going back to anything else.


Interesting. Is there a way to shape it to topics you care about? It's all about Sudan and Ukraine, which, while globally important, isn't what I'm after; I'm much more interested in local (Australian) news / tech news.


Hey, author here.

Yeah, the initial idea was to uncover globally significant news, but a lot of people are asking for ways to find "locally important" or "industry important" news. So I'm shifting my direction a little bit.

It's currently possible to filter news by broad category in a paid version (health, science, tech). I plan to expand the list of categories, so it's possible to go deeper. I also plan to add news in other languages (translated) and add country filters, so it's possible to see only news related to individual country/region.


Thanks for building this! One thing I'd love is for there to be fewer duplicates.

The top 6 stories at the moment are duplicates of a single story:

7.7 - Russia launches intense air attack on Kyiv, Ukraine claims to have shot down all 18 missiles.

7.1 - Series of explosions heard in Kyiv as Russia attacks.

7.1 - Massive Russian missile strike hits Kyiv in attempt to destroy Ukraine's new air defence systems.

6.9 - Kyiv targeted by dense Russian missile and drone attack.

6.9 - Russian drones and ballistic missiles attack Ukraine's capital after President Zelensky secures new arms pledges.

6.9 - Russia launches intense air attack on Kyiv with drones, cruise missiles, and possible ballistic missiles.


I'm not a paid user (yet?), but it seems on the pricing page that you can pay $10/mo for shaping topics to your prefs.


> It uses AI (ChatGPT-4) to read the top 1000 news every day and rank them by significance on a scale from 0 to 10 based on event magnitude, scale, potential, and source credibility.


Spronket accomplishes the same thing, but the ranking is way better

https://www.spronket.com/sharedConfig?shared-config=23924690


There's also https://www.boringreport.org/app which removes sensationalism from headlines.


Promptr is a coding assistant tool that allows you to ask GPT to produce or modify code, and the results will be automatically applied to your file system.

https://github.com/ferrislucas/promptr

From the README: Promptr is a CLI tool that makes it easy to apply GPT's code change recommendations with a single command. With Promptr, you can quickly refactor code, implement classes to pass tests, and experiment with LLMs. No more copying code from the ChatGPT window into your editor.



Ones that actually save me a lot of time I would otherwise spend googling:

plz-cli, a terminal copilot (not just an autocomplete - you can ask it to explain, refactor, or well - do anything), https://github.com/m1guelpf/plz-cli

Code GPT, a Visual Studio Code copilot, https://marketplace.visualstudio.com/items?itemName=DanielSa...


It's Bing Chat for me. It's free and I get GPT-4. I don't expect this to last forever though. I haven't seen any ads and am wondering how long Microsoft will subsidize my daily habit.


Short Circuit adds ChatGPT to Siri, really shows what Siri could be. https://shortcircuit.chat/


I wish there was an Android version


I've been really enjoying this add-on for Google Workspace. Plus, it's made by an irl friend of mine :)

https://gpt.space


I built op, an Excel + Jupyter Notebooks + GPT tool for working with Python pandas dataframes.

https://www.opapp.io/


I have been using it as a code companion, getting it to do some dirty work for which I would otherwise have had to ask Google, wade through results, find an example similar to what I'm trying to achieve, and then apply it to my situation.

Instead, I can describe my precise situation to ChatGPT and get something that is almost 95% ready to plug straight into my code.

I will give an example.

I work on an ASP.NET application with a heavy SQL backend. Sometimes while I am working on a big task, I skip a few things as I develop and hone in on my ultimate solution. Then I go back and tidy things up. I would sometimes mock data that I would have had to write into real SQL tables into temporary tables instead, and at the end I would go and turn those temp tables into real tables. ChatGPT has been very good at turning that temporary (staging) work into the real thing.

e.g. Hey ChatGPT, here's my settings table which I have defined as a temporary table; can you write me a script that turns this into the real table, and another script for the data import?

e.g. Hey ChatGPT, I need to output this XML from this SQL; can you have a go at turning this into something like this? Here's the table schema I'm working with.


My use case as well. Job requires us to be very careful about what info we give it so most of the time is spent crafting plausible but anonymized example inputs though.


I'm really interested in a tool that can ingest documentation (I have it in markdown and asciidoc) and help me write new documentation. Any suggestions?


How about a tool that ingests code (and tests) and creates documentation? Anything like this yet?


I'm working on something like this, but it's not ready yet. Should be later this week.


I'll mention one which I didn't write, which I found pretty cool.

iOS app "Friday" lets you use to talk to ChatGPT. It seems to be just simple glue (I'm sure there are many similar ones) between speech recognition, GPT, and text-to-speech, but the end result is that when you're bored you can have fun discussions without typing.


I couldn't find it on the app store, can you post the link?


The GrammarlyGO LLM makes an already essential writing companion even better. I’m now drafting new content and expanding ideas all in GrammarlyGO. It even checks the grammar and style of the LLM output!

https://www.grammarly.com/grammarlygo


As someone with access to the Air Force Donovan LLM tools with Scale, I personally think their SKD (Sentient Killer Drones) tool is one to keep an eye on for the future. Imagine Sydney but she's mad at you and she knows where you live and she's embodied in a high tech drone packed with explosives.


Is this for real? They called it a Sentient Killer Drone?


Yes and it's confusing because there's a SKD SDK (sentient killer drone software development kit) to integrate with the larger platform that includes the telemetry, prioritized missions, battle space sensor network, and the LLL (life-long learning) module.


This is a spoof, referencing Criminal Minds.



Copilot, then https://www.phind.com/ and I am biased but the one that I am working on - https://saga.so/ai for integrated GPT inside notes and tasks.


As a developer who works with a lesser known ERP (Odoo, which famously has a terrible level of _good_ technical documentation online), PyCharm/co-pilot continues to blow my mind with how well it interprets my workflow when building out code.

It also does a relatively good job of writing unit tests.


I've been using the API in S-GPT [1], an iOS shortcut that I use primarily to summarize the body text of a webpage I'm on, from the share sheet. When it can fit the text in the context, I'm pretty happy with the results.

It also supports sending it your clipboard contents when you launch it and your input contains the word "clipboard". Good for when you're in a pinch.

I mapped some repetitive prompts to text replacements on my phone that auto-expand when I type `*<some shortcut here>` in the input field.

1: https://www.macstories.net/ios/introducing-s-gpt-a-shortcut-...


https://type.ai has embedded GPT-4 in a way that is more natural for long-form content.

I've tried about seven other AI text generators/editors, and so far this is the best.


I am running this tool to compare free and open LLMs:

https://www.gnod.com/search/ai

So far I know of 3:

- Phind

- Perplexity

- YouChat

If you know more, let me know and I'll add them.


I am on https://www.phind.com the whole day: it's everything I need

But I know nothing about who made it; can any of you help?



Thanks.


DankGPT is able to draw context from a library of documents (textbook, papers, class slides) to explain any topic and answer complicated reasoning problems.

It’s very similar to ChatPDF, but you can include multiple documents and it has much better context selection. This leads to better answers in practice (fewer "the source does not contain information on…" responses and fewer hallucinations).

https://dankgpt.com


I am dogfooding the tool I made for cold emails, called Nureply. Basically it helps me do marketing for Nureply by using Nureply.

I am mostly a technical person and not the best at selling/marketing, so I built Nureply to help me meet potential customers and learn from them directly.

Take a look here -> https://nureply.com


TranscribeMe is my favourite (https://transcribeme.app); it transcribes voice notes from WhatsApp and Telegram. Not only that, if you have a long voice note, the bot summarizes it with AI. You can also add ChatGPT to those messaging apps.


Not necessarily OpenAI GPT powered, but local LLMs have gotten pretty good over the past few weeks. I am most interested in using an LLM for automation with tools like AutoGPT and Godmode.space.

I prefer using an LLM locally if possible since it gives me more control and I don't have to worry about the additional cost or OpenAI's infrastructure being under load.


As a continuation to this thread, for anyone interested, here's the link to a Discord community for developing and sharing GPT/AI tools to enhance everyday life:

https://discord.gg/579renpEPn


Besides ChatGPT, I'm getting enough added value from CoPilot and Grammarly to make them worth paying for.

https://www.perplexity.ai/ is my favourite "search" tool (ironically beating GPT-enhanced Bing) for outright speed and quality of results.


Thanks for this. I just tried it and I am impressed. Bookmarked.


I think ChatPDF is worth mentioning, though I haven't had a chance to use this tool yet:

https://www.chatpdf.com/

https://news.ycombinator.com/item?id=35626312


I’ve been getting great results from Notion AI. I like how the LLM is integrated into an already powerful knowledge management tool. It makes it easier to iterate on ideas and learn new concepts.

https://www.notion.so/product/ai


News Minimalist:

> It uses AI (ChatGPT-4) to read the top 1000 news every day and rank them by significance on a scale from 0 to 10 based on event magnitude, scale, potential, and source credibility.

https://www.newsminimalist.com/


Most of the articles in the top 10 seem to be about Ukraine. I understand the war is significant, but 5 articles in the top 10?

3 articles in the top 20 about Israel?

They should work on how exactly significance is determined, and significant for whom?


Author here. I plan to add a "how it works" section, but the basic idea is that each news story is rated on several parameters:

- scale is the number of people affected by the event described in the news story.

- magnitude is the strength of the effect.

- potential is the likelihood of the event leading to other, more significant events.

- source credibility considers how trustworthy the source is and what its track record looks like.

Then these parameters are combined into a single score.

Also fair criticism re: repeats. I plan to solve this by clustering similar news, so one event is only given one title.
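
To make the idea concrete, the combination is conceptually just a weighted sum. The sketch below is illustrative only; the weights and the 0-10 ranges are made-up numbers, not the exact formula the site uses.

    # Illustrative sketch: combine per-story parameters into one significance
    # score. The weights and the 0-10 parameter ranges are made-up examples,
    # not the actual News Minimalist formula.
    def significance_score(scale, magnitude, potential, credibility,
                           weights=(0.35, 0.30, 0.20, 0.15)):
        """Each parameter is assumed to be rated on a 0-10 scale."""
        w_scale, w_magnitude, w_potential, w_credibility = weights
        return (w_scale * scale
                + w_magnitude * magnitude
                + w_potential * potential
                + w_credibility * credibility)

    # A widely covered, high-impact story from a credible source scores high.
    print(significance_score(scale=8, magnitude=7, potential=6, credibility=9))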


Significance is obviously subjective, but you can definitely make an argument that "major world power is trying to take over a country" is significant. It's been going on for a while, which makes it harder to feel like it's so important, but it really is.


Hey, author here.

That was the initial idea. Significant events don't stop being significant once we get tired of hearing about them.

But there's definitely a problem of duplicates. When separate news sites post about a similar event, each gets rated relatively similarly by ChatGPT, which creates clusters like we see today.

I want to solve this soon by combining similar stories into a single block with one title.


That's another problem I had noticed from a bit of playing around (boringreport has a similar problem), so I'm glad you're already on it! I've signed up for premium.


Thank you so much! Hope to fix it soon.


Currently I love using https://imagetocaption.ai for my social media posts. Creates nice captions for my photography page out of my images.


I'm not using AI tools too much at the moment, but ChatGPT is probably my number one tool right now. I'm also playing around with the OpenAI API to launch some of my own micro projects soon.


I'd like to recommend ekatra.one, a GPT-powered education platform designed for under-served learners, especially in India. It helps with personalized learning experiences and uses WhatsApp for course delivery.


If you have an OpenAI key, and want to share access with someone else, or enjoy having a joint AI role-playing session:

https://havewords.ai/


I'm working on a tool which creates git repos and PRs from instructions: https://www.gitwit.dev/


I've found Bard to be extremely useful, more so than ChatGPT. Its internet capabilities are a small but super important change.

You can pass it a URL and perform actions on the webpage.


Definitely Notion-AI (technology powered by OpenAI). It replaces other tools, including Writesonic, Wordtune, etc.

Additionally, OpenAI Chat is a useful tool for day-to-day tasks.


I’m using the Bing one at the moment to write background on articles. And it helps me generate D&D character back stories and campaign plots.


shell_gpt is pretty neat just in terms of getting ChatGPT up and running in your console very quickly, and being able to customize the role. Using ChatGPT in the shell has been a really good experience, cutting down a lot on the distractions of using a browser and Google search results. But I think there is space for an even better and more feature-complete shell application for ChatGPT.


Opencommit - analyses git changes and writes a detailed commit message.


I use Poe [1] by Quora; It's free on Web.

1. https://poe.com


I use CodeGPT in VS Code and IntelliJ, and GPT for Docs and Slides in Google Docs.


> Intellij

What? how?



This plugin: https://plugins.jetbrains.com/plugin/21056-codegpt

Works great. I have an OpenAI API key and I've configured the plugin to use GPT-4.


> Which ones are actually worth using?

None


+1


For me, in terms of ones I’ve used 10 times or more during the past week:

ChatGPT

An iPhone app for using GPT-4

An AI newsletter that sends me new tools (I've found lots of cool tools from it but none that I use regularly)

I think that’s it? Kinda surprising. There are so many gpt powered products I’ve tried but none I’ve stuck with.


Dude!! URLs please.


I run this newsletter on things like business ideas, app launches, LangChain deep dives, and other AI-newsworthy things: https://curl.beehiiv.com


He's probably talking about Matt Wolfe's newsletter and AI tools site: https://www.futuretools.io/


3.5 does indeed suck.

4 is good but you still need to use it properly


And is GPT replacing Google search for folks?


Copilot and kagi.com's quick answers.


botphobci.com to turn boring mono-style text into powerful, passionate human-like text.


News Minimalist. Major time saver.


https://www.you-tldr.com/ is one I've been coming back to, though I don't yet use these tools much. This particular one is handy when I want to find out what a video with a click-baity title/thumbnail is actually about, without having to start watching it. It lets me satisfy that curiosity with a quick paragraph summary instead.


I've been using a tool that I've been developing myself (with GPT assistance lol) to basically monitor issues in a list of GitHub repositories, make the requested changes, and open a pull request to close the issue. It can also monitor comments on a pull request and classify them as "question" (respond with an answer) or "request" (update the PR with the requested changes and respond with additional info).

It's not perfect but it's at a point now where it is allowing me to make significant progress on personal projects that I would not otherwise have time to do. I already sit in front of a computer at work all day. I want to minimize doing that outside of work, but I still have lots of code projects I want to do on the side.

Main repo: https://github.com/mobyvb/pull-pal

Examples:

- Drafting an action plan and coming up with open questions based on specific requirements (issue: https://github.com/mobyvb/download-simulator-2023/issues/1, PR created by bot: https://github.com/mobyvb/download-simulator-2023/pull/2/fil...)

- See also asking the bot to write code based on a step in the generated action plan: https://github.com/mobyvb/download-simulator-2023/issues/3 and PR https://github.com/mobyvb/download-simulator-2023/pull/4. In this PR, pay attention to the comments I left; the bot takes feedback from the comments and will update the code accordingly, allowing you to iterate on a single PR before merging code

- Basic updates to existing HTML file (issue: https://github.com/mobyvb/pull-pal/issues/4, PR: https://github.com/mobyvb/pull-pal/pull/5/files)

- Writing an Arduino script from scratch based on specific requirements (issue: https://github.com/mobyvb/midi-looper/issues/1, PR: https://github.com/mobyvb/midi-looper/pull/2/files)

Still lots of improvement to go but I'm having a lot of fun.

Exposition if you want:

My experience using GPT-4 for programming has been pretty fun. First I was experimenting with prompts like "Given <this code>, how do I accomplish <this task>?" I have been discovering what types of tasks it's particularly good at (generating action plans, breaking tasks down into subtasks, writing simple-ish code). The code is usually stuff I could figure out myself, but it's still (sometimes) more efficient than me trying to write it, doing some Google searching, updating it, and so on. The chat format also makes iterative improvement fairly easy, e.g. "you forgot to use <some variable>" or "please add comments explaining what the code does" or "replace this text in the html with some generated content about a digital assistant tool".

I'm a manager and team lead so I find myself writing a lot of tickets based on high-level product requirements for my team to work on. Because I am very familiar with the code base, I often provide a lot of technical detail, e.g. providing links to specific files, functions, and PRs relevant to the issue. I found that prompting GPT4 with a similar level of detail resulted in success. However, it was still really good at more general tasks.

An example of a task that GPT performs pretty well at: "write an index.html landing page with a content section that is vertically and horizontally centered using flexbox. In the content section, generate a heading and paragraph talking about an AI-powered digital assistant for programmers. Add some basic styling to the page, with soft colors. The font of the heading and paragraph should be different. Serve index.html from a main.go file on port 8080. Also add an endpoint to the server at POST /api/number which returns a random integer between 14 and 37. In index.html, add a button that calls this endpoint and displays the number on the page"

(GPT4 can handle this prompt easily with no errors in the code; GPT3 will struggle, so it needs to be broken down more)

I could do all of the stuff in that example myself. But so can AI. I prefer to write out what I want and get some code that's usually 90%-100% perfect, make some slight modifications, maybe ask for some different color options, etc.. Point is, that's probably 30 mins saved for the same end result (honestly, better, considering my design skills are nonexistent).

I work a lot with GitHub repos already, so it was straightforward for me to replace the ChatGPT interface with a "GitHub interface" which I'm already familiar with and like (open issue, reference issue from PR, merge PR, issue auto-closed). I also like being able to iterate on a "pending" change in a PR by leaving comments before merging. Also, I can do it from my phone!

To see the specific prompts this tool is using as the foundation (at the moment), see

* https://github.com/mobyvb/pull-pal/blob/main/llm/prompts/cod...

* https://github.com/mobyvb/pull-pal/blob/main/llm/prompts/com...

If you have read this far, I hope it sounds interesting to you. The tool is GPL licensed, and I would love if other people tried it out so that I can get feedback on the best improvements to make/bugs to fix.


.


ChatGPT. No point in adding an expensive middleman on top that’s usually just a few clever prompt tricks.

Side note: having gotten access to Copilot Chat, it’s disturbing how quickly the ChatGPT UI has become established in my mind as the standard. Copilot Chat, despite being integrated into VS Code, feels clunky and alien compared to ChatGPT in a separate window. Funny how fast new things become the standard by which others are measured.


Same for me. I think this will only be more true once I am using plugins as standard.


[flagged]


This looks really cool. What format does it store files in?


Just a JSON blob in Firebase. I am looking into making an Electron app so the data stays local:

https://github.com/egonSchiele/chisel/tree/adit/electron


[flagged]


You're using a language model to give financial advice to retail investors who don't know any better? This is a horrendous idea that will probably get you sued if you present it as an investment strategy. Financial advice, like legal advice, is not something you can trust an AI to give out as if it were complete advice. There's a reason you need a license for this.


Oh wow. Every sentence in your comment just makes it progressively worse.

What you’re creating is outright dangerous if not actively malicious. You’re targeting inexperienced retail investors and giving them financial advice that you’ve pulled out of an LLM’s arse.

Somebody is going to get hurt, and you are without a doubt going to get sued over it. It’s not an if, but a when.

EDIT: s/ u/ i/


I’m sure they have a ‘this is not financial advice’ notice, which is likely to be about as effective as those offered by the finfluencers currently being targeted by the Australian Tax Office.


I'd have a bit more (still not much) sympathy for them if they hadn't said they're actively targeting inexperienced investors. That's just inevitably going to fuck someone over. And yeah, a notice like that is about as valid as a warranty-void-if-removed sticker.

I took a look at their site, and it seems like they're pushing NFTs as well, and although their site doesn't seem to mention blockchain, their investor pitch deck does seem to. So many red flags.

It's times like these I think that maybe software engineers should need to be professionally licensed.


I'll try for some more constructive input: This is a good idea, if you properly couch it. Just as an investor must understand they can lose money when investing, they must understand this sort of input won't always be entirely accurate (much like the evaluation by a real human wouldn't be). A well informed human being gets their information from many sources.

If you can manage to get your users to actually understand this, I think this is a very cool idea. Following a giant stream of news about your big stocks is tedious as hell and anything that can reduce that workload is a good idea.


> A well informed human being gets their information from many sources

They're actively targeting uninformed, inexperienced investors though.


Yes, and the way to solve them having no info is to give them an easy start and guide them to next steps they can actually work with. In my opinion it is not useful to take a service that seeks to solve something and bury them in "you will get sued" comments. It's hateful and won't lead anywhere good.


I don't care how "hateful" I'm being. Making a LLM do stock picks and then telling people who don't know any better that it's a reliable way to invest money is something that will cause someone to lose their savings. It's not just about them getting sued - it's about them causing gigantic harm to someone's actual life. Plus, if you take a look at their site, it's clear they're also involved with web3/crypto nonsense, which has stolen quite enough money from honest people already.


I’m being blunt, not hateful. Not everything needs to be wrapped in nice words and pleasantries for the sake of it.

I find it hard to be constructive toward a project that I strongly feel should not exist and whose existence I believe will be actively harmful.


[flagged]


Account is 57 minutes old and is solely being used to spam a generic summarizing tool.


Hey, yes, it’s a new account.

Actually, this is the first time I've had something to share after a year of consuming.

About the product, I'd like to know more about what you mean by "generic summarising tool".

Currently our strategy is: it's the first product we have built, so let's ship it and engage with actual users; that's probably the reason for a little generic-ness.

A future thing we are exploring is letting you hook up your own data and build a bot with authentication and payments, and share it on a custom domain.


Yuck. I was interested in the intention of this thread, but it’s pretty obviously become “advertise your own tool.”


Please, the title is a bit misleading. GPT can be many things, including "General Purpose Technology". The post is about the product ChatGPT.

https://en.wikipedia.org/wiki/General-purpose_technology


He is correctly referring to GPT as the underlying language model that is also used by ChatGPT.

https://en.wikipedia.org/wiki/GPT-3 https://en.wikipedia.org/wiki/GPT-4


I think it's OK for there to be multiple definitions for a TLA (Three Letter Acronym, but also Tennessee Library Association, Temporal Logic of Actions and quite a few others.)

I had genuinely never seen GPT used for General Purpose Technology until I followed your link to Wikipedia.


I don't see a point in fighting against the current; it raises the question: what could possibly be achieved?



