Hacker News new | past | comments | ask | show | jobs | submit login
Ask HN: What tech is under the radar with all attention on ChatGPT etc.?
147 points by vinaypai on May 2, 2023 | hide | past | favorite | 168 comments
ChatGPT and other generative AI tools seem to be taking the lion's share of mindshare in the tech industry right now.

I'm curious to hear what interesting new things people are seeing that AREN'T trendy right now (yet?!).




Something I'm personally excited about: colour e-ink displays. There are a few decent colour e-ink tablets on the market already, for surprisingly low prices ($400-500). So far the colour reproduction and refresh rates are still somewhat suboptimal, but development appears to be progressing rapidly. I predict that in a few years there will be little reason to get a conventional tablet over an e-ink one, at least for the “reading and taking notes” use case.


I agree with this and would even go as far as to say color e-ink screens are basically ready to push out greyscale ones right now. The problem is that no one is making a device with them that's any good. I owned one of the Boox color e-ink devices for about a week, and the experience was awful because it's a full Android OS shoehorned into something that only kind of works on an e-ink display.


High enough quality ones could finally make digital photo frames practical.


> There are a few decent colour e-ink tablets on the market already, for surprisingly low prices ($400-500).

Which ones are you referring to? Are they complete and ready to ship products? Or is that a DIY project / kickstarter?


Yes, for example this one: https://onyxboox.com/boox_novaairc


Bought one of the monochrome Boox readers a while back and love it—the Google Play Books compatibility is what sold me.

I thought about splurging for a full-color model but wasn't sure whether the tech was there yet. Do you think it's worth the upgrade or should I hold off a little longer?


I compared the two in a store and the color was not good. IMO if you want color, just get a real tablet. Color e-ink is the worst of LCD and e-ink: it's slow and ugly, eats battery life, and hurts your eyes in the dark.


Commodity RISC-V chips. Some of these have just entered mass production, such as the CH32V003 (10 cents each in 1k quantities).

https://github.com/cnlohr/ch32v003fun

Fully open source stack!

Grab the eval board below and poke around: https://www.aliexpress.com/item/1005005269690018.html

Also see this excellent writeup on the BL602 chip using the NuttX RTOS: https://lupyuen.github.io/articles/nuttx


What is special about RISC-V? Why would I want a RISC-V chip over something like an ESP32 or a Pico?


>What is special about RISC-V?

It is the best ISA around, and it is an open standard with wide industry support.

>over something like an ESP32

Espressif has stated its intent to transition to RISC-V entirely. Newer ESP32 chips like the C3 and C6 are already based on RISC-V.


So whose stock should I buy here?


The one I linked is a good starter; it's from the manufacturer and includes the programmer, a dev board, and extra chips. They have other listings for other variants too, such as one with an Ethernet PHY and much more GPIO and compute.


WebGPU. It’ll be live in Chromium stable this Thursday and will be the biggest step change for browser-based GPU since the original WebGL.


WebGPU is all the rage right now for hosting AI models in browser :)


This project is doing LLMs of all things with WebGPU too!

https://mlc.ai/web-llm/


Are WebGPU "libraries" at a point where they could be recommended instead of Python + PyTorch for neural network training (rather than training in PyTorch, exporting the model to ONNX, then turning around and loading and using it)? If not, how far off are they? I've had trouble understanding why the PyTorch GPU backends (CUDA, MPS, etc.) can't be ported to WASM + WebGPU. Something must be missing, or it must be a huge effort, but in theory you're supposed to be able to do GPU computation with WebGPU, right?

Seems like the ecosystem is very early/non-existent.


Yes, it's definitely lagging the CUDA ecosystem. I did a write-up a few weeks back on the various approaches people are taking to do inference in the browser (WASM, WebGL, WebGPU): https://digest.browsertech.com/archive/browsertech-digest-th...

Matt Rickard also had a short write-up recently: https://matt-rickard.com/implementing-llms-in-the-browser

My take is that we probably won't see much training done on WebGPU, because training is done upfront and it makes sense to standardize on a single GPU vendor and use an interface that can squeeze all the juice out of those GPUs (CUDA). But for inference and run-time computations, it could be very interesting to take a model trained with CUDA/PyTorch and export it (maybe with Apache TVM or tensorflow.js) into WebGPU that can run on end-user devices.


> But for inference and run-time computations, it could be very interesting to take a model trained with CUDA/PyTorch and export it (maybe with Apache TVM or tensorflow.js) into WebGPU that can run on end-user devices.

In its current state, can you train on PyTorch, export to ONNX, load ONNX in JavaScript/WASM, then use it for WebGPU inference?

I'm not trying to sound obsessed with/married to ONNX, I just thought it was "the standard". Curious to learn what alternatives people are using now, but I fear even talking about what might be done here is discussing the bleeding edge.

Edit: A quick Google shows yes https://onnxruntime.ai/docs/tutorials/web/


> In its current state, can you train on PyTorch, export to ONNX, load ONNX in JavaScript/WASM, then use it for WebGPU inference?

I believe so. Onnxruntime very recently merged a WebGPU backend: https://news.ycombinator.com/item?id=35694553

You can also go directly from PyTorch to WebGPU with Apache TVM. (ONNX is also supported, but my understanding is that it's better to go direct). This is an example using an LLM trained with PyTorch (I think) and run in the browser: https://mlc.ai/web-llm/


I can't seem to figure out whether the PR for the WebGPU backend for onnxruntime is supposed to land in a 1.14 release or a 1.15 release, has already landed, isn't yet scheduled to land, etc.: https://github.com/microsoft/onnxruntime/pull/14579

https://github.com/microsoft/onnxruntime/releases I don't see it in any releases yet?

https://github.com/microsoft/onnxruntime/milestone/4 I don't see it in the upcoming milestone.

I don't see any examples or docs that go with it

https://github.com/microsoft/onnxruntime/wiki/Upcoming-Relea... This seems to be out of date

https://github.com/microsoft/onnxruntime/tree/rel-1.15.0 I do see the js/webgpu work merged into here so I guess it'll be released in 1.15.0

https://onnxruntime.ai/docs/reference/releases-servicing.htm...

> Official releases of ONNX Runtime are managed by the core ONNX Runtime team. A new release is published approximately every quarter, and the upcoming roadmap can be found here.

ONNX Runtime v1.14.0 was Feb 10th


It can be converted, and can be fairly fast, see Tensorflow.js or https://jott.live/markdown/m1_webgpu_perf. However, any "web" standard will always be a subset of what the hardware can truly provide, because it has to support ALL GPU vendors. For example, leveraging Nvidia Tensor Cores or Apple's neural accelerator is not possible. For ML training and inference this means that any WebGPU implementation is at least ~3x slower (likely much more) than an optimized CUDA implementation.


> Web Shading Language (WSL)

The WSL in the article was throwing me off.


That's not necessarily true; WebGL, for example, supports extensions.


Correct, but I don’t see Firefox or Chrome bothering to add tensor core support.


Excited to see WebGPU support become stable in three.js! They've been working on it in a beta state, but I foresee huge perf improvements once it's widely supported in the major browsers.


Do you see folks continuing to build on top of three.js? It looks like Rust game engines like Bevy and Fyrox are set to deliver comprehensive functionality to the browser over WASM/WebGPU.


That's cool that there are WASM-enabled Rust engines! I'll be honest, before you mentioned them, I'd never even heard of Bevy or Fyrox.

As for people continuing to use three.js: yeah, I still see a lot of folks using it to build games. The whole "JS everywhere" idea still seems to be really popular.

It probably also helps that new comp sci students learning 3D graphics are often now taught using three.js instead of C++ like when I was a student.


It's directly linked to buzz around ChatGPT


Why is WebGPU a thing rather than just exposing Vulkan in the browser?



1. Lots of innovation in internal combustion engines is getting close to production: cleaner, more efficient, etc. This is important even while the industry is moving to EVs.

2. Bambu Lab leapfrogged prosumer 3D printing by 5-10 years. Their printers are becoming the de facto standard. This will enable all sorts of startups for the next decade.

3. Most iPhones now have a 3D scanner, and AI-aided software to accurately scan your body and generate designs is becoming more mature. See also 2.

4. Materials science: both new discoveries and new applications for 50-year-old discoveries.


Bambu leaped ahead by 5 months at most. Everyone has high-speed options now, with Prusa notably last to ship. Speed isn't the only important factor; don't feed the Boat race.


Can you name some of the innovation happening in ICE vehicles? Just some terms I can search for would be appreciated. Thanks!



I wish that was "under the radar" instead of "subject of a giant embarrassing marketing push"


South America has been using bioethanol as a fuel for a while now; it's cheaper and made from crops. There are projects to bring that to North America and the rest of the world!


Doesn't bioethanol take more fossil-fuel energy to create than it replaces?


https://en.m.wikipedia.org/wiki/Ethanol_fuel_in_Brazil

I'm not very well versed in the subject but this should point you in the right direction!


> for average conditions one unit of fossil-fuel energy is required to create 8.3 energy units from the resulting ethanol

It seems they’re in a unique position because sugar cane grows there, is used in industry, and the leftovers that aren’t usable for anything else can be turned into fuel.

I had no idea Brazil has a unique bio ethanol industry, thanks for sharing.
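
The quoted energy balance is easy to put into numbers. A quick sketch (the 8.3 figure is from the quote above; the ~1.3 figure for corn ethanol is a commonly cited rough estimate, not from this thread):

```python
# Back-of-envelope energy return on (fossil) energy invested for ethanol.
# The 8.3 figure comes from the quoted Wikipedia passage on Brazilian
# sugarcane; the ~1.3 figure for US corn ethanol is a rough, commonly
# cited estimate included here only for contrast.
def net_energy(fossil_units_in: float, eroi: float) -> float:
    """Energy units delivered minus the fossil energy spent making them."""
    return fossil_units_in * eroi - fossil_units_in

print(net_energy(1.0, 8.3))  # sugarcane: ~7.3 units of net energy per unit in
print(net_energy(1.0, 1.3))  # corn: ~0.3 net, barely better than break-even
```

So the answer to the grandparent question depends heavily on the feedstock: sugarcane comes out far ahead, while corn is close to a wash.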


Now that the NFT and Meta clown cars have crashed, metaverse work is quietly plugging along. It's a niche, but it may be a Roblox/Fortnite sized niche.


> Meta clown cars

Metaverse is merely premature. AI is the bigger and more general tech wave, but Metaverse will start to crest in 5 years or so.

Think about all the immersive, interactive fiction powered by LLMs and AI. That's just scratching the surface of what's possible.

Games, movies, narrative, and social will converge.


On the other hand, no they won't. People with nothing to write about talked about games and movies converging for years and they never even came close.

The closest you could say is that games have a narrative woven in. Movies are passive, and everything in them is intentional. Games are fundamentally about interactivity. They are practically opposite forms of entertainment.

I don't even know what you mean by 'social will converge'.


Oh hey! You're the person who was telling me half a year ago that AI porn wouldn't be a thing [1, 2]. I'm wholly confident you're just as wrong about this, and I'd even put my money on it [3].

Here's my prediction:

- Stories, narratives, characters, and game loops will become generative. Their capacity to generate new unseen and unwritten content will be heralded as revolutionary.

- The quality of games and GenAI will soon reach that of Hollywood films and take on a cinematic feeling.

- Players or viewers will be able to actively participate in the unfolding world (games) or sit back and watch them unfold (movies). This will be a spectrum (and not just on a single dimension).

- Narrative instances will be passive, single player, multi player, and massively multiplayer. You can sit back and watch a world unfold with your friends, or each participate as your own individual character that is an immersive and key part of the world. Cooperatively or in competition with one another, and at every point within that that spectrum. This will be similar to D&D, but at a scale never before seen. Wholly engrossing.

- Just like we've seen inklings of with YouTube and TikTok, you'll be able to remix the narratives you construct and participate in. You'll be able to world build, tweak, and share. You'll be able to take off the shelf situations and narratives and customize them as much as you want. Your previous "play through" or "stage time" can be recorded as its own personality that can become an AI agent in someone else's journey.

- This world building scene will be full of creators, musicians, directors, actors, and lots of people that just want to experience and hang out.

- Just as before, I'll throw porn into this prediction too. GenAI porn will similarly replace the manually filmed artifact, and moreover, it'll be interactive. Some people will even fall in love with their artificial companion(s). As this becomes more normalized, these adult AI companions may even find themselves injected into otherwise family friendly narratives. What you do with them at home is your own business.

Before you say it's implausible, I know for a fact that 3d-guided diffusion works and that it'll soon spook the entire Hollywood industry to the core.

[1] https://news.ycombinator.com/item?id=32711614

[2] https://news.ycombinator.com/item?id=35523656

[3] effectively I have. I quit my $400k/yr job to go full time on this and poured over half a million of my personal funds (so far) into building my company


> telling me half a year ago that AI porn wouldn't be a thing

No, I told you it wouldn't devastate an industry built on hosting videos. It hasn't, and those video hosting sites already host animated videos; they don't care how their content is generated if people want to watch it.

Go back and re-read what you linked with a little more focus this time.

> poured over half a million of my personal funds (so far) into building my company

And how many people are paying for your 'generative game loops' (whatever that means)?


Ignoring timelines, I’d say every single one of your predictions will become a reality. I think for a lot of folks, these integrations of AI into existing things seems natural, intuitive and common-sensical. I’m not sure why there’s such a mass of folks denying any of these things.

I've considered quitting my job and pursuing it full-time. Being an early adopter here seems like a surefire way to succeed. I'm curious what your journey has been. I'm launching my product next week; I have a single customer signed on, with a plan to scale out should the results come back positive.


Wait... what? You've invested half a million into AI porn? Seriously!?

I'm morbidly curious: how do you plan to recoup your costs? I assume porn is so trivial to get for free that it must be a relatively low profit industry...


No, I'm not building porn! (Not that there's anything wrong with those that are.) I conflated GenAI as a whole and I see how my post is interpreted now. My mistake.

I always think of the oft-quoted meme about porn being the canary in the coal mine for the adoption of new tech: film, VHS, usenet, web, P2P, Twitter/Tumblr/Reddit, streaming, VR, AI... I think the trend has held pretty well historically.

NovelAI, Civit, etc. are making a lot of money on it, and they're leading indicators of more expansive "SFW" use cases that will follow.


Are you hiring?


I'm curious to learn more about this as well! I've had a long-gestating narrative/world planned for exactly this kind of technology, but haven't been sure where to look next to bring it to life.


Would love it if you could share a bit about your previous $400k/yr job. What market/sector/niche/role was it? Anything you can share -- really curious!


> Roblox/Fortnite sized

Every 10-year-old in N. America is on those and/or Minecraft. No way the metaverse has that kind of reach anytime soon.


Is anything actually exciting happening in the space, though?


Roblox is making steady incremental progress on its platform/engine.

Epic released Unreal Editor for Fortnite about a month ago.

Meta have some interesting stuff in their pipeline.

I think Roblox especially will shift how the younger generation views games, with more emphasis on social. It is their MMO.


Is Roblox really considered the metaverse? I feel like it's just an MMO with a game editor. Not that different than many other games with custom game editors. It feels like a stretch to call this a paradigm shift.


Well, Roblox is almost as metaverse as any metaverse. "Metaverse" is just a buzzword for MMOs with level editors that also supposedly sell land that's worth something, host corporate theme parks, and offer business meetings with uncanny 3D avatars that sound like your colleagues.


At least my understanding was that popular definitions of metaverse include some type of “reality-based” technology like AR or VR.


Roblox supports VR.


Is it really much of a shift? Last I checked, Minecraft servers had done pretty well with the generation prior.


Meta released a near-photorealistic avatar demo recently, so they're definitely still progressing.

In the hardware space, OpenBCI has partnered with Valve and they're working on a BCI-integrated immersive headset called Galea. It runs about $25k+ at the moment. They've begun shipping to researchers and developers, and I've heard they're aiming for a consumer line, but I don't know of a planned release date.


It already is; there's a market for game skins, for example in CS:GO. I don't see anything else on the horizon.


Will WebXR ever be part of the revolution, or is it going all native like apps did on mobile?


We've all pretty much forgotten about AR/VR

But Apple's going to be launching their headset next month


It seems rather controversial within Apple itself: https://www.macrumors.com/2023/03/26/apple-employees-concern...


I'm kind of excited about the battery being wired rather than on the head unit. I'm hoping Apple's is the lightest headset to date. I never found the HoloLens comfortable, and the Quest 2 would start out okay, but the weight fatigue was real.


I think most people have realized it's gimmicky and will be for a very long time.


I'm really interested to see how they differentiate their headset from HoloLens. Exciting stuff.


I wish WebXR were at the forefront, but I see it going the way of PWAs.


Apple should remove the screen from the next iPhone and force users to buy the headset to be able to see the display, similar to how they removed the 3.5mm jack.


The next iPhone will be invisible and weigh nothing. People will line up for days to be the first to get an empty box.


We drove by an Apple store after the last big launch... my 6-year-olds were like, "Why are they waiting in line? Don't they know they can order it online?"


One of the questions I've been asking: we've basically gone physical servers -> VMs -> cloud -> container orchestration on about a decade-long cadence. So what's next--especially considering you can argue that a lot of the low-level architectural things haven't really changed during that period? But then, perhaps architectural details don't matter much at this point relative to things like AI/ML. Even WASM is mostly in the weeds.


Physical servers (or devices) are next, obviously!

IT is cyclical.


Clearly RISC-V servers you design from the CPU up (with some very helpful software). Only to be replaced by the N types of computers that typically wind up being designed somewhere around years 3-5. To later be replaced by those same systems in the cloud.

Or of course world war 3 happens and the latest, greatest thing going in IT is “I mean I’m still alive, whatever that’s worth.” And making anything usable from whatever is found. Even a blinking LED becomes either neat or a way for a roving warboy to find/kill you.


Speaking of cyclical IT, my guess is we go "forward" to the terminal/mainframe-esque concept: simple and cheap "terminals" (laptops) that you use to log into a much beefier "mainframe" (a cloud workstation). I'm at a FAANGMULA and I've been using a cloud workstation from my cheap laptop as my daily driver for months now. The VM is much higher-specced than any desktop they would have bought me. Just don't look at the price if you plan to run it 24/7...


And for the mostly browser-based stuff I do, outside of multimedia and some development, I'm pretty much on a terminal (except we call it a browser) which I run from 2015-vintage Macs.


> FAANGMULA

killin' me, Smalls.


> FAANGMULA

Only somebody who does not work at FAANG uses FAANGMULA, so why not just go with MULA.


They already are: see the on-prem versions of the common public clouds.


I just recently moved all my services that don't need 100% uptime to a dedicated server instead of several cloud VMs running containers. It's cheaper, has less overhead, is simpler to configure, and works just as well (so far).


Wouldn't surprise me if RPi style single board computers do start making headway as "microservers".

Instead of renting a VM or running docker containers you rent a small physical server for each service.


“Locally hosted cloud infrastructure!”


Greenlake?


The trendy (meaning both that it is hyped and that it follows the trajectory of the trend you identified) answer is serverless/scale-to-zero (Lambda, Cloud Run, etc.).

That said, Lambda has been out for a while and seems to have avoided taking over the world. Generic vCPU-hours are so cheap now that it isn't compelling from a cost perspective (serverless will either cost you more at the high end of the scale, or else save you a few bucks a month on your idle instances at the low end). Also, the developer experience isn't as good yet, IMO. Things like LocalStack help, but it's still not natural-feeling to deploy a big application this way. (Scale-to-zero is great for small side projects, though -- I have a few apps in Cloud Run that cost me like a nickel a month, a few dollars if I have a good month in terms of traffic.)
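
The cost tradeoff described above can be sketched with back-of-envelope arithmetic (all prices here are illustrative placeholders, not actual AWS list prices):

```python
# Rough cost model for the "serverless vs. always-on" tradeoff.
# All prices are hypothetical placeholders, NOT current AWS list prices.
PER_REQUEST = 0.0000002        # $/invocation (hypothetical)
PER_GB_SECOND = 0.0000166667   # $/GB-s of function runtime (hypothetical)
ALWAYS_ON_MONTHLY = 5.0        # $/month for a small always-on VM (hypothetical)

def lambda_monthly_cost(requests: int, avg_ms: float, mem_gb: float) -> float:
    """Monthly serverless bill: per-request fee plus GB-seconds of runtime."""
    gb_seconds = requests * (avg_ms / 1000.0) * mem_gb
    return requests * PER_REQUEST + gb_seconds * PER_GB_SECOND

# An idle side project: serverless is nearly free; the VM costs $5 regardless.
print(lambda_monthly_cost(10_000, 100, 0.128))
# Sustained traffic: serverless overtakes the flat-rate VM.
print(lambda_monthly_cost(50_000_000, 100, 0.128))
```

This is the "save a few bucks at the low end, pay more at the high end" shape: the lines cross somewhere in the middle, and where exactly depends entirely on your traffic pattern.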


This is a fantastic take (seriously, no sarcasm here). I'll add that serverless platforms have not actually shown the ability to scale _efficiently_. It feels like every software architect has been talking about the March 2023 post^ from Amazon Prime Video on migrating from serverless to serverful. That is a direct response to the inability of serverless to scale, because under the hood it is serverful but limited. Besides AWS Lambda and Google Cloud Run, others have tried OpenFaaS and Cloudflare Workers. I think only Cloudflare Workers have "scaled", but that's because each service scales in a limited way at the edge, across lots of edge devices.

I do not know if this is solvable: a permanent service with light autoscaling handles even inconsistent load so well, and reduces operational complexity so much, that I don't know if serverless will ever be anything more than a toy. It is no coincidence that AWS Lambda's first language was a frontend language* (Node.js). Those use cases have low to moderate scale (because beyond that you split frontend and backend).

^Amazon Prime Video post: https://www.primevideotech.com/video-streaming/scaling-up-th.... *First mention of AWS Lambda languages in AWS Compute Blog only mentioned nodejs: https://aws.amazon.com/blogs/compute/container-reuse-in-lamb...


I agree with some of the points you've mentioned, but I also think that there definitely are great use-cases for AWS Lambda, and that they are (especially when using a good deployment tool) a great option for many different types of applications.

The biggest problem is that the computing model, or the architecture commonly required for this sort of app (event-driven and asynchronous), is hard to develop, manage, and reason about (especially with the async Lambda integrations that by default retry their work if they fail). This means that you need to think about them and architect them in a way that makes them idempotent.

If you configure everything properly, event-driven, AWS Lambda-based apps can be extremely useful. I'm just not always convinced that the benefits are worth the complexity that needs to be added to your app to make it all work.

What I wanted to say is that Lambda functions have their use-cases, and that if used reasonably, they can heavily simplify the application that they are used within.

Our product (https://stacktape.com, disclosure: I'm a founder) makes it easy to deploy both container-based (ECS Fargate and ECS EC2) apps and Lambda functions, with almost no changes in the configuration. This allows our users to always choose the right technology/computing model for the use case they are working on.


And serverless isn't even all that distinct from containers, see e.g. Knative. You're mostly still doing stuff around a Unix process model.


I'm excited by the idea of WASM being 'the thing' because it basically boils down to passing raw executables around, and running them in trusted contexts. It's the ultimate end to the cycle of removing all operating systems.


I'm excited too, but be advised: people said the same thing about the revolutionary "write once, run anywhere" programming language and platform coming out of Sun Microsystems in the 90s...

"All of this has happened before, and it will all happen again." ;-)


Now that SharedArrayBuffer is back (for a while now), we have everything we need to do parallel programming in a browser. I feel like that unlocks enormous opportunities that weren’t previously realized with WASM.


It doesn't remove the OS, it "just" requires all C/C++ to be ported to the new WASM/Browser OS. Which isn't very exciting. The browser isn't meant to be an OS.


This is off-the-cuff, so forgive me if it sounds dumb:

What about "AI tendrils" -- tiny AI workers on little devices that feed the mother AI? Like an agent, each would monitor its device, but you tell it what info or behaviours are allowed, and that gets securely streamed back to the AI/model/LLM (I don't know where the data would go). Then when you run your query/prompt, you're running it against something much closer to the processing power than if you had to go query each device per request?
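
One way to read the "tendrils" idea is edge pre-aggregation: each device reduces its raw data to a small summary locally, and only the summaries travel to the central model. A toy sketch (all names here are hypothetical illustrations, not any real framework):

```python
# Toy sketch of the "tendril" idea: each edge device reduces its raw
# readings to a small local summary, and only summaries travel to the
# central aggregator. All names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Summary:
    count: int
    total: float
    maximum: float

def summarize(readings: list[float]) -> Summary:
    """Runs on the device: collapse raw data to a few numbers."""
    return Summary(len(readings), sum(readings), max(readings))

def merge(summaries: list[Summary]) -> Summary:
    """Runs centrally: combine per-device summaries without the raw data."""
    return Summary(
        sum(s.count for s in summaries),
        sum(s.total for s in summaries),
        max(s.maximum for s in summaries),
    )

device_a = summarize([20.1, 21.4, 19.8])
device_b = summarize([22.0, 23.5])
fleet = merge([device_a, device_b])
print(fleet.count, fleet.maximum)  # 5 readings seen across the fleet
```

The point is that the central model never touches raw device data, which is also where the "you tell it what info is allowed" privacy control would live.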

--

I thought this output from GPT was interesting for thinking of the above:

https://i.imgur.com/WsxwuJh.png

and a BOM

https://i.imgur.com/5g1mXfp.png

https://i.imgur.com/MdbQKx1.png

https://i.imgur.com/Bi0PRTQ.png

--

Actually, I asked GPT to recommend additional columns; here is the result:

User: what columns would you suggest to add?

ChatGPT:

Here are some additional columns that you may consider adding to the HN table:

* Display type: OLED, LCD, AMOLED, etc.

* Display resolution: the number of pixels on the screen.

* Water resistance: the level of protection against water damage.

* Heart rate monitor: whether the wearable device has a built-in heart rate monitor.

* Sleep tracking: whether the wearable device can track your sleep patterns.

* Music playback: whether the wearable device can play music and if it has built-in storage.

* NFC: whether the wearable device supports near-field communication for contactless payments.

* Voice assistant: whether the wearable device has a built-in voice assistant like Siri, Google Assistant, or Alexa.

* Other sensors: such as an accelerometer, gyroscope, barometer, altimeter, or compass.

Keep in mind that the more columns you add, the wider the table becomes, which may make it more difficult to view or analyze. So, you may want to choose the most important columns for your specific use case and audience.


Heat pumps and smart grid. Both relatively boring but very impactful for climate.
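
For readers wondering why heat pumps are such a big climate lever: they move heat rather than generate it, so they can deliver several units of heat per unit of electricity. A rough Carnot-limit sketch (real units achieve only a fraction of the ideal figure):

```python
# Why heat pumps matter: they move heat instead of generating it.
# This computes the ideal (Carnot-limit) coefficient of performance;
# real-world units deliver maybe a quarter of this, i.e. a COP of 3-4,
# versus exactly 1 for a resistive heater.
def carnot_cop_heating(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal heat delivered per unit of work, for the given temperatures."""
    t_hot = t_hot_c + 273.15   # indoor temperature in kelvin
    t_cold = t_cold_c + 273.15  # outdoor temperature in kelvin
    return t_hot / (t_hot - t_cold)

# Heating a home to 21 C from 0 C outdoor air: ideal COP is about 14.
print(carnot_cop_heating(21.0, 0.0))
```

The gap between the ideal figure and real units' COP of 3-4 is also why the incremental engineering progress here still matters.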


Is there any significant progress being made in heat pump technology? I'm not really following it, but my superficial impression is that only very small incremental improvements are being made.

(Writing from Sweden, where residential geothermal heat pumps were installed more than 20 years ago, even in our cold climate.)


It is so good to see progress in these areas. They are so fundamental that we rarely think about them (fundamental in the sense that we just assume they are there). I have some info on heat pumps, but do you have a good source on the state of smart grids?


Like usual, basic PHP + MySQL apps making money. They will never die.


nice, some examples?


I'm excited about Windows 95, the world wide web, the potential of the world wide web for e-commerce, CD-ROM-based games like Myst, PDAs, digital cameras, 3D graphics and animation.

Sorry, I'm hallucinating.


Is there a way I can hallucinate to this time as well? :)


It all begins by accepting CRT monitors, PVMs and TVs into your heart.


Antiaging technology, like the research David Sinclair has done to reverse macular degeneration in mice, and more recently in monkeys, by using 3 of the Yamanaka factors.


It feels like there are two new cloud technologies arriving: generative AI and quantum computing. On the consumer side, there is consumer-level demand (<$100/mo) for text-to-image and video-synthesizer-type playgrounds. But on the enterprise side, particularly around Wall Street and gov/defense, there is real thirst from the pointy heads in suits for new innovations like cloud KEM and "quantum-resistant vaults", even though these services haven't really been implemented as products by the Big Six cloud providers. And those enterprise quantum cloud products could easily fetch a premium (>$1000/mo), I hope ;)

Introducing post-quantum Cloudflare Tunnel

https://blog.cloudflare.com/post-quantum-tunnel/


New databases.

It's probably the most "under the radar" of all. It's interesting how little attention you get from people about this stuff (when talking about implementation or internals).

It's like, literally, you only make a splash AFTER you have, at minimum, something competitive with a major established product.

But you get more "views" talking about compilers than talking about DBs!

(Except when talking about compilers that use the relational model. I probably should rebrand https://tablam.org as a "lisp"! -- not accurate, but whatever.)

* I also pity anyone doing a new OS; now that is crickets!


And improvements to existing ones, like query optimizers and schedulers for SQL.


Are you talking about vector databases?


Databases in general (that is why I don't say RDBMS or NoSQL...)


surrealdb is amazing.


Do you have any insight into how SurrealDB (https://surrealdb.com/docs/introduction/start) compares to DuckDB (https://shell.duckdb.org/)? I haven't done that comparison yet; I might now, based on your recommendation. I saw DuckDB at Data Day Texas 2023 and their demo was intriguing, even though our use case wasn't very SQL-y.


I just started hearing about WASI. I think I've been living under a rock.



related: https://yewtu.be/watch?v=-UrdExQW0cs

Veritasium visually explains how quantum computers can break current cryptography, using RSA signatures as an example.


Stuff that's actually innovative.

ChatGPT knows a lot about existing tech - even more than people who think they're pretty steeped in it (including myself) - but for inventive stuff that doesn't show up after a bit of google-fu, it's much less helpful (even useless).

So great assistant, great rhythm section but falls down as lead.

For now, anyway (and open question, for how long?)


I'm personally excited about precision autonomous agriculture. Think about drones 'sniping' pests with tiny doses of pesticide instead of farmers spraying entire fields.


Are they being deployed today? Super exciting idea.


Drones have been used for crop monitoring for a while now, with the aerial footage used to determine which parts of a crop most need specific resources such as pesticides, fertilizers, or irrigation. Something a bit more automated, like what this commenter is describing, seems like it should be in use today or, if not, very soon. I remember reading about this stuff around 2018-2019 when I had a drone and was looking at ways to make money with it. Of course, it may not be in widespread use.


There are a lot of wireless communication improvements related to WiFi and 5G. I think this opens a lot of doors, especially around low power apps.

Also, somewhat related: secure anonymous data exchange has a lot of cool possible applications beyond health exposure tracking.


One use case for vastly faster wireless communications: VR headsets. Right now if you use a headset wirelessly you're in for a much worse experience than running a VR application natively on the headset itself. If the headset had a fast enough (and low enough latency) wireless connection to your PC though it'd be an enormous leap forward for the technology.

Most VR application/game developers limit their polygons/resources to that of a mid-range mobile phone because that's about what the stand-alone headsets can deliver in terms of graphics/memory performance. However, if they knew that end users were going to be running their applications with PC-level hardware it would enable a whole heck of a lot more detail, better AI, level generation, and all-around functionality in VR applications/games.
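For a sense of scale, here is a rough back-of-envelope for the raw bandwidth an uncompressed PC-to-headset video stream would need (the per-eye resolution, refresh rate, and color depth are my assumptions, roughly in line with current standalone headsets):

```python
# Raw bandwidth of an uncompressed PC-to-headset video stream.
# Assumed specs (hypothetical, roughly standalone-headset class):
width, height, eyes = 2160, 2160, 2   # pixels per eye, two eyes
refresh_hz = 90                       # frames per second
bits_per_pixel = 24                   # 8 bits per RGB channel

bits_per_second = width * height * eyes * refresh_hz * bits_per_pixel
gbps = bits_per_second / 1e9
print(f"{gbps:.1f} Gbit/s uncompressed")
```

That lands around 20 Gbit/s, above even Wi-Fi 6's theoretical ~9.6 Gbit/s ceiling, which is why today's wireless PC VR leans on aggressive video compression and the latency it costs.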


If low power WiFi could be used instead of Zigbee and have the same battery life or better, that would make building home monitoring and automation devices easier.


Very selfishly - distributed compute. Not decentralized, distributed.

Compute and data are being created and run everywhere; we need platforms that understand how to use them and get insights without (or before) moving them.

Our contribution: https://github.com/bacalhau-project/bacalhau (think Kubernetes but built for the distributed world).

Disclosure: I co-founded the Bacalhau Project


All the private fusion companies making very interesting progress.

https://www.helionenergy.com/ https://cfs.energy/ https://www.tokamakenergy.co.uk/ https://tae.com/

Insert below: jokes about it being 25 years away, comments about how it will never work, etc.


I'm very bullish on fusion given the new magnet breakthroughs, detailed here: https://www.youtube.com/watch?v=KkpqA8yG9T4 The SPARC reactor is still on schedule to be completed and operational in 2025. It's expected to generate net-positive electricity, on the order of 140MW pulses. A larger version to supply energy to the grid is planned for the 2030s.

My concerns have moved to "okay, what is the cost of things even if energy is 'free'?". For example, desalination: currently the majority of the cost of desalination is in the energy used, but even if the energy is free, materials cost and capital expense currently make up about 40% of the total.

Will the cost of the inputs also go down dramatically as a result of fusion? Or will desalination still be cost prohibitive for most developing areas due to materials? If desalination price falls 99.9% -- what environmental costs will be associated with the extraction/transportation of necessary materials as well as the warm & highly saline effluent that will be dumped into surface waters?
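A quick sanity check on that point, using the illustrative ~60/40 energy-versus-everything-else split from the paragraph above (real plants vary):

```python
# If energy is ~60% of desalinated-water cost (illustrative split) and
# fusion makes it free, how far does the price actually fall?
energy_share, other_share = 0.60, 0.40   # other = materials + capital

cost_with_free_energy = other_share      # the energy term goes to zero
reduction = 1 - cost_with_free_energy
print(f"cost falls by {reduction:.0%}, not 99.9%")
```

So free energy alone gets a large cut, but nowhere near a 99.9% price drop; the rest would have to come from cheaper membranes, materials, and construction.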

With unlimited clean energy, some ridiculously inefficient schemes can become practical, such as removing CO2 from the atmosphere.


Well it's no joke, all the money chasing the mirage of D-T fusion for energy production; even if it works it will not be any kind of panacea. Other fusion types might be more realistic.


Nah, A(G)I will accelerate that. Give it a few months.

Edit: Dang... It was just a joke. (well partially, the timescale should have made it obvious still)


For the indefinite future, fusion will be 25 months away.


Drone tech, satellite tech, phone applications for war (I assume), and bringing war to social media are actively changing the state of warfare as we know it in Ukraine.

The idea that violence isn't necessarily bad and can also be a force for good is something I think will change the tech landscape. A lot of those people at Google who said no to killer robots are probably having second thoughts; being one of those types of people myself, it has certainly modulated my opinion.


Sorry, killing human beings en masse can never be a good thing.

Have some compassion. Countless Russians are being conscripted to fight a war they don't want to. These are all children of someone.

That things like https://en.wikipedia.org/wiki/Christmas_truce can happen speaks volumes about this. The wars are between countries, not their people.

Don't lose your humanity.


Here is a game, the greatest game, imho. I think you will like it and I think you will enjoy the outcome, the outcome doesn't directly agree with me and the game talks about the Christmas truce.

https://ncase.me/trust/

War itself is not evil. When someone owns a slave, there is already violence encoded in that relationship. The slave owner is already at war with the slave. When the slave fights back, that is just. The slave owner is a child of someone, too. Having a mother who cares about you has no bearing on justice. Serial killers have mothers, too.

You can say it is a war between countries and not their people, but it is people who are firing those guns and people building those weapons and people producing war media and people relaying commands to soldiers.

Your opinion is one of privilege. People with privilege can claim that humanity means "not fighting back." You can believe that because you exist in a country where people can fight back on your behalf. Privilege is the ability to hold beliefs without having to deal with their consequences. You feel entitled to peace, but can't see that peace isn't always a choice you get to make; sometimes it's a choice someone else makes.

In the game of the prisoner's dilemma, someone can choose to defect; you cannot make them cooperate. When someone defects against you, having empathy for them and choosing to "cooperate" just means you lose. "If you want peace, prepare for war..." If you aren't prepared for war, then you are showing that when someone defects, they can win.
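That dynamic is easy to see in code. Here is a minimal iterated prisoner's dilemma (my own toy sketch, not the ncase implementation, using the standard 3/1/5/0 payoffs):

```python
# Minimal iterated prisoner's dilemma, in the spirit of ncase.me/trust.
# Payoffs: mutual cooperation -> 3 each; mutual defection -> 1 each;
# a lone defector gets 5 and the exploited cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def always_cooperate(opp_history): return "C"
def always_defect(opp_history):    return "D"
def tit_for_tat(opp_history):      # copy the opponent's last move
    return opp_history[-1] if opp_history else "C"

def play(a, b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)  # each sees the opponent's history
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 50): pure cooperation is farmed
print(play(tit_for_tat, always_defect))       # (9, 14): one exploitation, then retaliation
```

Against an unconditional defector, the pure cooperator is simply farmed, while tit-for-tat eats exactly one exploitation and then retaliates, which is roughly the game's point about being prepared.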

You frame the war around helpless people being walked to their deaths by corrupt aristocracies, but you fail to frame war in terms of justice and ask what justice requires.

To Ukrainians it does not matter if it is the Russian government or the Russian people invading their lands. If you cannot see their plight, it is you who do not have humanity.

In regards to the Christmas truce: I do not believe Germany was fighting a fundamentally genocidal war then, whereas Russia is fighting a genocidal war against Ukraine now. I don't think you can equate those.

I say this as someone who considers themselves deeply liberal, who has a history of thinking like you do and who has now come to the conclusion that what I thought (what you are saying) is very wrong: Check your privilege.


I think that when you see war as "the greatest game" you have already lost, already fallen down a hole that will never end.


I wasn't referring to war; I was referring to the prisoner's dilemma, and in particular the ncase game's take on it.


Very well said, thank you for writing that.


i hope we get some starlink competitors.


how many more satellites do you want in space lol?

On a more serious note, I've always wanted to start a waste management company for Earth's geosynchronous orbit. I am pretty sure tech in that sector is booming.


Battery tech

Display panel tech

Silicon photonics


+1 for battery tech. Particularly interesting what Enovix is doing in this market.


Indeed. Lithium-air and sodium-air batteries look extremely promising, the former for higher power densities, and the latter for low temperature environments.

I can't wait to get a 500 kWh battery backup for my house for $10k...eventually...


>> I can't wait to get a 500 kWh battery backup

At first it sounded amazing, but then, gazing at that number, it struck me suddenly: isn't this the energetic equivalent of about 0.5 tons of TNT?

So if I rephrase the statement as "I cannot wait until I can store half a ton of TNT in my garage", the whole premise starts to sound less and less appealing.

(especially since there was an electric scooter battery explosion in a flat near me which killed someone)
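The arithmetic does check out, using the standard conversion factors (1 kWh = 3.6 MJ; 1 ton of TNT is defined as 4.184 GJ):

```python
# Energy stored in the hypothetical home battery vs. TNT equivalent.
battery_kwh = 500
joules = battery_kwh * 3.6e6     # 1 kWh = 3.6e6 J
tnt_tons = joules / 4.184e9      # 1 ton TNT = 4.184e9 J by definition
print(f"{tnt_tons:.2f} tons of TNT equivalent")
```

About 0.43 tons, so the half-ton figure is right. Unlike TNT, a battery can't release all of that in a single detonation, but a thermal runaway fire is still no joke.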


What is new in display panels?


Extreme panel curvature, new panel sizes, software baked into the display, technologies like quantum dot reaching near-OLED quality at a fraction of the price.


What do you mean by "software baked into the display"?

(Like, firmware? Or is this more like an all-in-one type computer but the 'computer' part is equivalent to a Raspberry Pi/Arduino/etc [So - programmable, but somewhat minimal?]?)


Some high-end displays, for example, have a piece of hardware to control the display, with all sorts of multi-monitor settings. I suppose you could call this firmware, but it's much more advanced than the previous setup of a few buttons for menu-driven use. They can also take software updates, I think?


Where can I find out about "software baked into the display" vendors, people to talk to? I've long been musing about resurrecting the drafting table for architecture.


> software baked into the display

now the display can directly serve ads!


With rapid progress in AR/VR, traditional display tech improvements are like horse technology advances in 1900.


One of the primary limitations in creating an immersive VR experience is a device with the necessary displays and optics. They have to be high resolution, lightweight, durable, low power consumption, and ultra sharp even through the most distorted optical filters. It’s basically the holy grail of display tech.


AR/VR needs display tech.


A lot of smart people in my network (infrastructure stuff) lately doing things around data on kubernetes and data / storage orchestration.


What do you mean by storage orchestration? Can you give some examples?


Miniaturization of VR headsets is finally a thing. Google the Bigscreen VR headset compared to a valve index.


Mars Sample Return going through PDR.

Intrinsic AI making a product announcement in two weeks.

Deepmind collaborating with Robotis implementing deep RL on the robocup humanoids to play soccer at the next level.


There was an announcement about DeepMind joining the RoboCup competition?



HTML


I've recently realized that nobody under 30 has any idea what to do with an index.html file.


Odd. I'm under 30 and I've been writing HTML for around 15 years. Meanwhile, within the past year or so, I met a frontend dev at work that didn't even know what an HTTP request was. They were over 30 years old.

I think your attitude about the matter could use some adjustment. There is more to it than age.


My comment wasn't about writing HTML.


Well, I know for a fact that at least two people over the age of 30 are actively teaching groups of high schoolers to write HTML files in Notepad in a programming course.
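For anyone who hasn't seen it done: the entire exercise fits in one file. Save this as index.html, double-click it, and any browser renders it with no server, build step, or framework:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My first page</title>
  </head>
  <body>
    <h1>Hello, world</h1>
    <p>No server, no framework, just a file.</p>
  </body>
</html>
```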


How To Meet Ladies still works fellas!


Neurotech


Micro-LED, maybe. Apple is working on using it on the Apple Watch, and if that pans out, likely on larger devices.


Kubernetes doesn't get a lot of publicity /s


Graph Neural Networks and Geometric Deep Learning


text-to-speech


mojojs :D .... and vanilla everything.


compute in memory


AT Protocol?


Biology and energy.


[flagged]


Are they really doing any of those things? Looking at their Twitter, it is mostly retweets of giveaways. Seems pretty spammy and low quality.


Ouch. That’s a lot


lol



