• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • I think I may not be presenting my position well, and thus am coming off as a right-wing partisan hack of the sort that wants to defund the EPA. That’s not my position.

    A lot of people (mostly conservatives and big businesses) complain about ‘red tape’ as a way of attacking various regulations. For example, people will say it’s impossible to build a power plant because of environmental red tape.
    A lot of that regulation is positive though. For example, even if the land is cheap, you can’t build a power plant next to a nature preserve because the pollution will kill all the birds. And I like that regulation. The power company will of course complain, as will the mines that were going to sell the plant coal. In cases like this, IMHO, they can all fuck off.

    At the same time though, the ‘red tape’ that many businesses complain about does sometimes actually exist. That is, to do business you have to get endless streams of licenses, approvals, permits, etc. for things where the bureaucracy and licensing process adds little or no value to either the industry or the population at large.
    From what I’ve read, this sort of thing exists a lot in Germany. I’ve talked to a few people who were starting a business in Europe and they specifically avoided a few countries for that reason.



  • SirEDCaLot@lemmy.fmhy.ml to Selfhosted@lemmy.world · Synology vs DIY · edited · 1 year ago

    Honestly I think you’ll be happy either way. Synology is very, very good at some things, and the software makes it easy and approachable to spin up a lot of private cloud type stuff without a lot of technical messing around. That said, you will get more hardware/performance for your dollar with a PC server. You can go the DIY route, or if you don’t mind a little more power consumption and want more performance, buy a used Dell PowerEdge on eBay. The real value you get from Synology is their software. Their photo app is very wife friendly. And I don’t think you’ll find any serious restrictions with it; you get full root SSH access into the box.

    So I guess my suggestion would be to evaluate the photo management in TrueNAS versus Synology. You can spin up a virtual machine of TrueNAS on your desktop and play with it if you want. The only other gotcha is that if you want Plex to do transcoding, you definitely want the PC, because you can throw in a GPU and accelerate that a lot.

    //edit- the one other thing to mention is backups- Synology has GREAT backup software and it’s free. Active Backup for Business will back up your desktop/laptop, versioned, deduplicated, very efficiently. And Hyper Backup will back up your Synology itself (or some parts of it) to the cloud, optionally with client-side encryption. I suggest Wasabi as the backend for that, it’s only like $7/TB/mo. Or just get another Synology and put it at the house of someone you know and you have an instant offsite backup with no recurring cost.





  • I agree with this 100%. That affects both the types of interactions and the types of users.

    When Reddit really took off 12 or so years ago, it was primarily a forum for discussion. I loved it because there would be in-depth, respectful, quality discussions on almost every page. I spent hours debating science and politics and technology and relationships and other things of substance with other intelligent respectful open-minded people.

    For a few years now, Reddit has been trying to become a quick content scroll app- bombarding the user with page after page of memes and videos and low effort crap that only holds attention for 12 seconds but results in another page load and thus another ad impression. In ‘new reddit’ and the apps, there’s very little focus on discussion or comments. Just quick content to flip through.

    And that affects the discussions on Reddit (quality discussions are now the exception rather than the norm) and also the people who join and stay at the site. There’s a lot more animosity, assumption of bad faith, etc.

    But I also think that because Lemmy’s design DOESN’T push people into quick content, but IS focused on discussions, that trend can reverse. People who want quick content will quickly grow bored here and leave. And we can keep the discussions respectful and open-minded.

    I also think that the ‘welcome to lemmy’ posts should talk more about community and culture; what sort of interactions users should and shouldn’t expect here. That should include an explicit warning that if you’re going to start arguments and assume everyone else is an idiot, this probably isn’t for you, but if you want to have good respectful discussions this is your new home.


  • > While it has its benefits, is it suitable for vehicles, particularly their safety systems? It isn’t clear to me, as it is a double-edged sword.

    Perhaps, but if you are developing a tech that can save lives, doesn’t it make sense to put that out in more cars faster?

    > I would be angry that such a modern car with any form of self driving doesn’t have emergency braking. Though, that would require additional sensors…

    Tesla does this with cameras whether you pay for FSD or not. It can also detect if you’re near an object and hit the gas instead of the brake, and it will cancel that out. These are options you can turn off if you don’t want them.

    > I’d also be angry that L2 systems were allowed in that environment in the first place, but as you say it is ultimately the driver’s fault.

    I’m saying- imagine the car had L2 self driving, and the driver had that feature turned off. The human was driving the car. The human didn’t react quickly enough to prevent hitting your loved one, but the computer would have.
    Most of the conversation around FSD type tech revolves around what happens when it does something wrong that the human would have done right. But as the tech improves, we will get to the point where the tech makes fewer mistakes than the human. And then this conversation reverses- rather than ‘why did the human let the machine do something bad’ it becomes ‘why did the machine let the human do something bad’.

    > I would hope that the manufacturer would make it difficult to use L2 outside of motorway driving.

    Why? Tesla’s FSD beta L2 is great. It’s not perfect, but it does a very good job for most parts of driving on surface streets.

    > I would prefer they had no self driving rather than be under the mistaken impression the car could drive for them in the current configuration. The limitations of self driving (in any car) are often not clear to a lot of people and can vary greatly.

    This is valid. I think the name ‘full self driving’ is somewhat problematic. I think it will get to the point of actually being fully self driving, and I think it will get there soon (next year or two). But they’ve been using that term for several years now, and especially the first few versions of ‘FSD’ were anything but. And before they started with driver monitoring, there were a bunch of people who bought ‘FSD’ and trusted it a lot more than they should have.

    > If Tesla offer a half-way option for less money, would you not expect the consumer to take the cheapest one? If they have an accident it is more likely someone else is injured, so why pay more to improve the self driving when it doesn’t affect them?

    That’s not how their pricing works. The safety features are always there. The hardware is always there. It’s just a function of what software you get. And if you don’t buy FSD when you buy the car, you can buy it later and it will be unlocked over the air.
    What you get is extra functionality. There is no ‘my car ran over a little kid on a bike because I didn’t pay for the extra safety package’. It’s ‘my car won’t drive itself because I didn’t pay for that, I just get a smart cruise control’.

    > Tesla is the only company I know of steadfastly refusing to use any other sensor types, and the only reason I see is price.

    Price yes, and the difficulty of integrating different data sets. On their higher end cars they’ve re-introduced a high resolution radar unit. Haven’t seen much on how that’s being used though.
    Tesla’s basic answer is that they can get where they need to be with cameras alone because their software is better than everyone else’s. For any other automaker that doesn’t have Tesla’s AI systems, LiDAR is important.

    > Another concern is that any Tesla incidents, however rare, could do huge damage to people’s perception of self driving.

    This already happens whether the computer is driving or not. Lots of people don’t understand Teslas and think that if you buy one it’ll drive you into a brick wall and then catch on fire while you’re locked inside. Bad journalists will always put out bad journalism. That’s not a reason to stop tech progress tho.

    > If Tesla is much cheaper than LiDAR-equipped vehicles, will this kill a better/safer product a la Betamax?

    Right now FSD isn’t a main selling point for most drivers. I’d argue that what might kill others is not that Tesla’s system is cheaper, but that it works better and more of the time. Ford and GM both have a self driving system, but it only works on certain highways that have been mapped with centimeter-level LiDAR ahead of time. Tesla has a system they’re trying to make general purpose, so it can drive on any road. So if the Tesla system takes you driveway-to-driveway and the competition takes you onramp-to-offramp, the Tesla system is more flexible and thus more valuable regardless of the purchase price.

    > Do you pick your airline based on the plane they fly and its safety record, or the price of the ticket, being confident all aviation is held to rigorous safety standards? As has been seen recently with a certain submarine, safety measures should not be taken lightly.

    I agree standards should apply; that’s why Tesla isn’t L3+ certified, even though on the highway I really think it’s ready for it.


  • Not sure of the exact details- I heard they were sampling 10 bits per pixel, but a bunch of their release notes talked about photon count detection back when they switched to that system.
    Given that the HW3 cameras started out being used to just generate RGB images, I suspect the current iteration is working by just pulling RAW format frames and interpreting them as a photon count grid, from there detecting edges and geometry with the occupancy network.
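
    To illustrate the idea, here’s a toy sketch of my own- the file names, frame geometry, frame count, and threshold are all invented, not anything Tesla has published:

    ```python
    import numpy as np

    # Hypothetical example: treat raw sensor output as a photon-count grid,
    # skipping the usual demosaic/RGB conversion step entirely.
    frames = [np.fromfile(f"frame_{i}.raw", dtype=np.uint16).reshape(960, 1280)
              for i in range(16)]

    # Averaging N frames shrinks shot noise by ~1/sqrt(N), which is how an
    # edge only a few photons deep can rise above the single-frame noise floor.
    stack = np.mean(np.stack(frames), axis=0)

    # Simple gradient-magnitude edge map straight off the photon-count grid.
    gy, gx = np.gradient(stack)
    edges = np.hypot(gx, gy) > 2.0   # ~2-photon difference, illustrative only
    ```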

    I’ve not seen much of anything published by Tesla on the subject. I suspect they’re keeping most of their research hush-hush to get a leg up on the competition. They share everything regarding EV tech because they want to push the industry in that direction, but I think they see FSD as their secret sauce- they might sell hardware kits, but they won’t let others too far under the hood.


  • > In our town we had a Tesla shoot through red traffic lights near our local school, barely missing a child crossing the road. The driver was looking at their lap (presumably at their phone). I looked online and apparently Autopilot doesn’t work with traffic lights, but FSD does?

    There are a few versions of this, and several generations with different capabilities. The early Tesla Autopilot had no recognition of stop signs; it was literally just ‘cruise control that keeps you in your lane’. FSD for sure does recognize stop signs, traffic lights, etc and reacts correctly to them. I BELIEVE that the current iteration of Traffic Aware Cruise Control (what you get if you don’t pay extra for FSD or Enhanced Autopilot) will stop for traffic lights, but I could be wrong on that. I know it detects pedestrians, but its detection isn’t nearly as advanced as FSD’s.

    I will give you that in theory, the time-of-flight data from a LiDAR pulse will give you a more reliable point cloud than anything you’d get from cameras. But I also know Tesla is doing things with cameras that border on black magic. They gave up on getting images out of the cameras and are now just using the raw photon count data from the sensor, and with the AI trained it can apparently detect edges with only a few photons of difference between pixels (below the noise floor). And I can say from experience that a few times I’ve been in blackout rainstorms where even with full wipers I can barely see anything, and the FSD visualization doesn’t skip a beat and it sees other cars before I do.

    > Would you still feel the same about Tesla if your car injured/killed someone or if someone you care about was injured/killed by a Tesla?

    As a Level 2 system, the Tesla is not capable of injuring or killing someone. The driver is responsible for that.

    But I’d ask- if a Tesla saw YOUR loved one in the road, and it would have reacted but it wasn’t in FSD mode and the human driver reacted too slowly, how would you feel about that? I say this not to be contrarian, but because we really are approaching the point where the car has better situational awareness than the human.

    > If we can put extra sensors in and it objectively makes it safer, why don’t we? Self driving cars are a luxury.

    For the reason above with the loved one. Suppose you can use cameras and make a system that costs the manufacturer $3,000/car and is 50 times safer than a human, or use LiDAR and make a system that costs the manufacturer $10,000/car and is 100 times safer than a human. Which is safer?
    The answer is the cameras, because they will be on more cars and thus deliver more overall safety.
    I understand the thinking that ‘Elon cheaped out, Tesla FSD is a hack system on shitty hardware that uses clever programming to work around a cut-rate sensor suite’. But I’d also argue- if they can get similar performance out of a camera and put it on more cars, doesn’t that do more to improve overall safety?
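
    To make that concrete with made-up numbers (the fleet size and safety multipliers below are purely illustrative, not real data):

    ```python
    # Back-of-envelope: what matters is fleet-wide harm, not per-car safety.
    FLEET = 3_000_000     # hypothetical total cars sold
    HUMAN_RISK = 1.0      # expected harm per human-driven car (normalized)

    def fleet_harm(equipped_cars, safety_factor):
        # Cars without the system stay at baseline human risk.
        unequipped = FLEET - equipped_cars
        return equipped_cars * (HUMAN_RISK / safety_factor) + unequipped * HUMAN_RISK

    # Cheap camera system ships on every car; pricey LiDAR on a third of them.
    print(fleet_harm(3_000_000, 50))    # cameras:    60,000 harm units
    print(fleet_harm(1_000_000, 100))   # LiDAR:   2,010,000 harm units
    ```

    Under those (invented) assumptions, the 50x system on every car prevents far more total harm than the 100x system on a third of the cars.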

    In the example above, if the car didn’t have the self driving package because the guy couldn’t afford it, wouldn’t you prefer that a merely decent but still better-than-human self driving system was on the car?


  • Don’t have the paper; my info comes mainly from various interviews with people involved in the thing- Elon of course, and Andrej Karpathy (he was in charge of their AI program for some time).

    They apparently used to use feature detection and object recognition on RGB images, then gave up on that (as generating coherent RGB images just adds latency, and object recognition was too inflexible). They’re now just going by raw photon count data from the sensor, fed directly into the neural nets that generate the 3d model. Once trained, this apparently can do some insane stuff like pull edge data out from below the noise floor.

    This may be of interest– it’s also from 2 years ago, before Tesla switched to occupancy networks everywhere. I’d say that’s a pretty good equivalent of a LiDAR scan…


  • Or maybe power grids are teetering because utilities raked in profit for the last two decades by ignoring upgrades that would obviously be necessary… Just a thought :)

    My utility sells $400 Wi-Fi touchscreen thermostats for like $25, the catch being you let them turn your AC down/off when grid load peaks. A few truckloads of thermostats are cheaper than grid upgrades, so they do the thermostats and kick the can down the road more.
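
    As a rough sketch of that math (every number here is invented, not from my actual utility):

    ```python
    # Hypothetical demand-response economics: subsidized thermostats vs. wires.
    homes = 50_000
    subsidy_per_stat = 400 - 25    # utility eats ~$375 per thermostat
    shed_kw_per_home = 1.0         # guess at AC load shed per home at peak

    thermostat_cost = homes * subsidy_per_stat       # ~$18.75M, one-time
    peak_shed_mw = homes * shed_kw_per_home / 1000   # ~50 MW off the peak

    upgrade_cost_per_mw = 2_000_000  # guess at substation/feeder upgrade cost
    print(thermostat_cost)                     # $18,750,000
    print(peak_shed_mw * upgrade_cost_per_mw)  # $100,000,000 of deferred upgrades
    ```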



  • My point stands- drive the car.
    You’re 100% right with everything you say. It has to work 100% of the time. Good enough most of the time won’t get to L3-5 self driving.

    > Camera-only is not authorized in most logistics operations in factories; I’m not sure what changes for a car.

    The question is not the camera, it’s what you do with the data that comes off the camera.
    The first few versions of camera-based autopilot sucked. They were notably inferior to their radar-based equivalents- that’s because the cameras were using neural network based image recognition on each camera. So it’d take a picture from one camera, say ‘that looks like a car and it looks like it’s about 20 feet away’, and repeat this for each frame from each camera. That sorta worked okay most of the time, but it got confused a lot. It would also ignore any image it couldn’t classify, which of course was no good because lots of ‘odd’ things can threaten the car. This setup would never get to L3 quality or reliability. It did tons of stupid shit all the time.

    What they do now is called occupancy networks. That is, video from ALL cameras is fed into one neural network that understands the geometry of the car and where the cameras are. Using multiple frames of video from multiple cameras at once, it generates a 3d model of the world around the car and identifies objects in it- what is road, what is curb and sidewalk, and where other vehicles and pedestrians are (and where they are moving and likely to move to). That data is fed to a planner AI that decides things like where the car should accelerate/brake/turn.
    Because the occupancy network is generating a 3d model, you get data that’s equivalent to LiDAR (3d model of space) but with much less cost and complexity. And because you only have one set of sensors, you don’t have to do sensor fusion to resolve discrepancies between different sensors.
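
    A minimal sketch of that data flow- my own illustration, with invented shapes, layer sizes, and class list, not Tesla’s actual architecture:

    ```python
    import torch
    import torch.nn as nn

    # Toy version of the flow described above: all cameras into one model,
    # a 3d occupancy grid with per-voxel classes out.
    class ToyOccupancyNet(nn.Module):
        def __init__(self, n_cams=8, grid=(100, 100, 8), n_classes=4):
            super().__init__()
            self.grid, self.n_classes = grid, n_classes
            # Shared per-camera feature extractor (same weights every camera).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=4, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=4, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # Fuse all camera features, decode a voxel grid of class logits
            # (say: free space / road / curb / vehicle).
            self.decoder = nn.Linear(64 * n_cams,
                                     n_classes * grid[0] * grid[1] * grid[2])

        def forward(self, frames):            # frames: (B, n_cams, 3, H, W)
            b, n, c, h, w = frames.shape
            feats = self.encoder(frames.view(b * n, c, h, w)).view(b, -1)
            return self.decoder(feats).view(b, self.n_classes, *self.grid)

    occ = ToyOccupancyNet()
    voxels = occ(torch.randn(2, 8, 3, 128, 256))   # -> (2, 4, 100, 100, 8)
    ```

    The real thing also uses multiple frames over time and the cameras’ calibration; this just shows the single-network, multi-camera shape of the idea.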

    I drive a Tesla. And I’m telling you from experience- it DOES work. The latest betas of the full self driving software are very, very good. On the highway, the computer is a better driver than me in most situations. And on local roads- it navigates them near-perfectly; the only thing it sometimes has trouble with is figuring out when it’s its turn at an intersection (you have to push the gas pedal to force it to go).

    I’d say it’s easily at L3+ state for highway driving. Not there yet for local roads. But it gets better with every release.




  • I’m not sure what kind of serious trouble they are actually in. I have spent most of today being driven around by my Tesla, and aside from the occasional badly handled intersection and unnecessary slowdown, it’s doing fucking great. So I would tell anyone who says Tesla is in serious trouble: just go drive the car. Actually use the FSD beta before you say that it’s useless. Because it’s not. It is already far better than anyone expected vision-only driving to be, and every release brings more improvements. I’m not saying that as a Tesla fanboy. I’m saying it as a person who actually drives the car.


  • Hopefully the whole ‘delivery service partner’ bullshit gets busted open. It’s nothing but a cutout.

    Amazon hires a 3rd party company (the DSP) to deliver packages, then pays them for this. On paper that’s two independent companies. Issue is, Amazon dictates nearly every aspect of the DSP business. So I would love to see it argued in court that there is not actually any separation between the DSP and Amazon- since Amazon policy (and crucially NOT DSP management) controls working conditions for the DSP employees, Amazon should be required to negotiate with the workers.

    Amazon will of course just fire the DSP here for non-performance. This is probably going to end up in court. I hope the union wins.