Wednesday, September 9, 2020

Iron Ox, which uses AI-powered farming robots for its semi-autonomous greenhouse operations, has raised $20M Series B, bringing its total raise to $45M (Khari Johnson/VentureBeat)

Khari Johnson / VentureBeat:
Iron Ox, which uses AI-powered farming robots for its semi-autonomous greenhouse operations, has raised $20M Series B, bringing its total raise to $45M  —  Robotics farming company Iron Ox today announced the close of a $20 million funding round.  The funding will be used …



from Techmeme https://ift.tt/35iHpbP
via A.I .Kung Fu

2021 Lucid Air debuts with 1,080 horsepower Dream Edition - Roadshow

After incubating for years, Lucid's luxurious sedan is ready to hit the road with up to 500 miles of electric range.

from CNET News https://ift.tt/3m8sMOk
via A.I .Kung Fu

2021 Lucid Air reservations open with $1,000 deposit, $7,500 for 'Dream Edition' - Roadshow

Itching to be among the first in line for Lucid Motors' new 500-mile electric sedan? Here's how to reserve your very own Tesla killer.

from CNET News https://ift.tt/33h3OU2
via A.I .Kung Fu

Three-motor Lucid Air Performance could be a 1,300 hp Model S Plaid fighter - Roadshow

The dual-motor Air EV currently tops out at the 1,080-horsepower Dream Edition, but there's room in the chassis for a third motor and even more power.

from CNET News https://ift.tt/2Zn14ne
via A.I .Kung Fu

2021 Lucid Air vs. Tesla Model S and Porsche Taycan: Performance EVs compared - Roadshow

Lucid's new all-electric sport sedan promises quick charging times, gobs of performance and Level 3 autonomy.

from CNET News https://ift.tt/3igBWpt
via A.I .Kung Fu

2021 Lucid Air: Everything we know about pricing, specs and more - Roadshow

The latest would-be "Tesla killer" launches next year with massive power, huge range and the world's fastest charging system.

from CNET News https://ift.tt/33blnoQ
via A.I .Kung Fu

Portland, Oregon unanimously adopted ordinances banning the use of facial recognition tech by city agencies, including the police, and by private businesses (Kyle Wiggers/VentureBeat)

Kyle Wiggers / VentureBeat:
Portland, Oregon unanimously adopted ordinances banning the use of facial recognition tech by city agencies, including the police, and by private businesses  —  The Portland, Oregon City Council today unanimously voted to adopt two of the strongest bans of facial recognition technologies …



from Techmeme https://ift.tt/3ifFk3T
via A.I .Kung Fu

In an open letter to Mark Zuckerberg, 41 civil rights groups based in the US, UK, and New Zealand demand Ankhi Das be suspended pending audit of Facebook India (Russell Brandom/The Verge)

Russell Brandom / The Verge:
In an open letter to Mark Zuckerberg, 41 civil rights groups based in the US, UK, and New Zealand demand Ankhi Das be suspended pending audit of Facebook India  —  Circumstances ‘show the potential for genocide,’ the groups said in an open letter to Zuckerberg



from Techmeme https://ift.tt/32cuO88
via A.I .Kung Fu

AWAC: Accelerating Online Reinforcement Learning with Offline Datasets


Our method learns complex behaviors by training offline from prior datasets (expert demonstrations, data from previous experiments, or random exploration data) and then fine-tuning quickly with online interaction.

Robots trained with reinforcement learning (RL) have the potential to be used across a huge variety of challenging real-world problems. To apply RL to a new problem, you typically set up the environment, define a reward function, and train the robot to solve the task by allowing it to explore the new environment from scratch. While this may eventually work, these “online” RL methods are data-hungry, and repeating this data-inefficient process for every new problem makes it difficult to apply online RL to real-world robotics. What if, instead of repeating the data collection and learning process from scratch every time, we were able to reuse data across multiple problems or experiments? By doing so, we could greatly reduce the burden of data collection with every new problem that is encountered. With hundreds to thousands of robot experiments constantly being run, it is crucial to devise an RL paradigm that can effectively use the large amount of already available data while still continuing to improve behavior on new tasks.

The first step toward a data-driven RL paradigm is to consider the general idea of offline (batch) RL. Offline RL considers the problem of learning optimal policies from arbitrary off-policy data, without any further exploration. This eliminates the data collection problem in RL and allows the incorporation of data from arbitrary sources, including other robots or teleoperation. However, depending on the quality of the available data and the problem being tackled, we will often need to augment offline training with targeted online improvement. This problem setting has unique challenges of its own. In this blog post, we discuss how we can move RL from training from scratch with every new problem to a paradigm that reuses prior data effectively, with some offline training followed by online fine-tuning.
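Schematically, this recipe looks like the following minimal Python sketch; the dataset contents, environment interaction, and update function are hypothetical stand-ins rather than any particular algorithm:

import random

offline_dataset = [("s", "a", 0.0, "s2")] * 100   # prior transitions (placeholders)
replay_buffer = list(offline_dataset)             # seed the buffer with prior data

def update(batch):
    pass                                          # stand-in for an RL update step

for _ in range(1000):                             # phase 1: offline training only
    update(random.sample(replay_buffer, 10))

for _ in range(1000):                             # phase 2: online fine-tuning
    transition = ("s", "a", 1.0, "s2")            # stand-in for env interaction
    replay_buffer.append(transition)              # on-policy data joins the buffer
    update(random.sample(replay_buffer, 10))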


Figure 1: The problem of accelerating online RL with offline datasets. In (1), the robot learns a policy entirely from an offline dataset. In (2), the robot gets to interact with the world and collect on-policy samples to improve the policy beyond what it could learn offline.

Challenges in Offline RL with Online Fine-tuning

We analyze the challenges in the problem of learning from offline data and subsequent fine-tuning, using the standard benchmark HalfCheetah locomotion task. The following experiments are conducted with a prior dataset consisting of 15 demonstrations from an expert policy and 100 suboptimal trajectories sampled from a behavioral clone of these demonstrations.


Figure 2: On-policy methods are slow to learn compared to off-policy methods, due to the ability of off-policy methods to “stitch” good trajectories together, illustrated on the left. Right: in practice, we see slow online improvement using on-policy methods.

1. Data Efficiency

A simple way to utilize prior data such as demonstrations for RL is to pre-train a policy with imitation learning and fine-tune it with on-policy RL algorithms such as AWR or DAPG. This has two drawbacks. First, the prior data may not be optimal, so imitation learning may be ineffective. Second, on-policy fine-tuning is data-inefficient, as it does not reuse the prior data in the RL stage. For real-world robotics, data efficiency is vital. Consider the robot in Figure 2 (left), trying to reach the goal state using prior trajectories $\tau_1$ and $\tau_2$. On-policy methods cannot effectively use this data, but off-policy algorithms that perform dynamic programming can, by effectively “stitching” $\tau_1$ and $\tau_2$ together with the use of a value function or model. This effect can be seen in the learning curves in Figure 2, where on-policy methods are an order of magnitude slower than off-policy actor-critic methods.
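A tiny tabular example makes this stitching effect concrete. In the hypothetical chain MDP below (our own toy, not the post's benchmark), neither trajectory alone reaches the goal from the start, but Bellman backups over the union of their transitions recover the full path:

import numpy as np

n_states, n_actions, gamma = 5, 2, 0.99           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))

# tau_1 goes start -> middle; tau_2 goes middle -> goal (reward 1.0 at state 4).
tau_1 = [(0, 1, 0.0, 1), (1, 1, 0.0, 2)]          # transitions (s, a, r, s')
tau_2 = [(2, 1, 0.0, 3), (3, 1, 1.0, 4)]

for _ in range(200):                              # replay both trajectories
    for s, a, r, s_next in tau_1 + tau_2:
        target = r + gamma * Q[s_next].max()      # Bellman bootstrap
        Q[s, a] += 0.5 * (target - Q[s, a])

print(Q.argmax(axis=1))                           # right from every non-goal state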


Figure 3: Bootstrapping error is an issue when using off-policy RL for offline training. Left: an erroneous Q value far away from the data is exploited by the policy, resulting in a poor update of the Q function. Middle: as a result, the robot may take actions that are out of distribution. Right: bootstrap error causes poor offline pretraining when using SAC and its variants.

2. Bootstrapping Error

Actor-critic methods can in principle learn efficiently from off-policy data by estimating a value function $V(s)$ or action-value function $Q(s, a)$ of future returns via Bellman bootstrapping. However, when standard off-policy actor-critic methods are applied to our problem (we use SAC), they perform poorly, as shown in Figure 3: despite having a prior dataset in the replay buffer, these algorithms do not benefit significantly from offline training (compare the SAC (scratch) and SACfD (prior) lines in Figure 3). Moreover, even if the policy is pre-trained by behavior cloning (“SACfD (pretrain)”), we still observe an initial decrease in performance.

This challenge can be attributed to the accumulation of off-policy bootstrapping error. During training, the Q estimates will not be fully accurate, particularly when extrapolating to actions that are not present in the data. The policy update exploits overestimated Q values, which in turn makes the estimated Q values worse. The issue is illustrated in Figure 3: incorrect Q values lead to an incorrect update to the target Q values, which may result in the robot taking a poor action.
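The mechanism is visible in the standard actor-critic updates themselves. The following schematic sketch (toy networks and random batches, not the post's SAC implementation) marks where the error enters:

import torch
import torch.nn as nn

obs_dim, act_dim, gamma = 3, 2, 0.99              # toy dimensions
Q = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
pi_mean = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

def q_value(s, a):
    return Q(torch.cat([s, a], dim=-1)).squeeze(-1)

s, a = torch.randn(8, obs_dim), torch.randn(8, act_dim)   # replay-buffer batch
r, s_next = torch.randn(8), torch.randn(8, obs_dim)       # (random placeholders)

with torch.no_grad():
    # a' comes from the current policy, not the dataset; offline, it can be far
    # out of distribution, where Q is pure extrapolation.
    a_next = torch.distributions.Normal(pi_mean(s_next), 1.0).sample()
    y = r + gamma * q_value(s_next, a_next)       # bootstrapped target

critic_loss = ((q_value(s, a) - y) ** 2).mean()   # regress Q toward y

# The actor ascends Q, so overestimated Q values are actively sought out, and
# the next bootstrapped target then inherits the error.
a_pi = torch.distributions.Normal(pi_mean(s), 1.0).rsample()
actor_loss = -q_value(s, a_pi).mean()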

3. Non-stationary Behavior Models

Prior offline RL algorithms such as BCQ, BEAR, and BRAC propose to address the bootstrapping issue by preventing the policy from straying too far from the data. The key idea is to prevent bootstrapping error by constraining the policy $\pi$ to stay close to the “behavior policy” $\pi_\beta$: the distribution of actions present in the replay buffer. By sampling actions from $\pi_\beta$, the policy avoids exploiting incorrect Q values far away from the data distribution.

However, $\pi_\beta$ is typically not known, especially for offline data, and must be estimated from the data itself. Many offline RL algorithms (BEAR, BCQ, ABM) explicitly fit a parametric model to samples from the replay buffer to estimate the distribution $\pi_\beta$. After forming an estimate $\hat{\pi}_\beta$, prior methods implement the policy constraint in various ways, including penalties on the policy update (BEAR, BRAC) or architecture choices for sampling actions during policy training (BCQ, ABM).
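Concretely, the explicit approach looks roughly like the sketch below: fit $\hat{\pi}_\beta$ by maximum likelihood, then penalize divergence from it in the actor objective. This is our illustration in the style of BRAC-like penalties, with made-up dimensions and fixed-variance Gaussians, not any one paper's exact objective:

import torch
import torch.nn as nn

obs_dim, act_dim, alpha = 3, 2, 1.0               # toy dimensions, penalty weight
Q = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
beta_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))
pi_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

s, a = torch.randn(8, obs_dim), torch.randn(8, act_dim)   # replay-buffer batch

# Step 1: fit the behavior model by behavior cloning. Online, this model must
# be re-fit continually as new data streams into the buffer.
beta_dist = torch.distributions.Normal(beta_net(s), 1.0)
bc_loss = -beta_dist.log_prob(a).sum(-1).mean()

# Step 2: constrain the actor to stay near the estimated data distribution
# with a KL penalty on the policy update.
pi_dist = torch.distributions.Normal(pi_net(s), 1.0)
a_pi = pi_dist.rsample()
kl = torch.distributions.kl_divergence(pi_dist, beta_dist).sum(-1).mean()
actor_loss = -Q(torch.cat([s, a_pi], -1)).mean() + alpha * kl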

While offline RL algorithms with constraints perform well offline, they struggle to improve with fine-tuning, as shown in the third plot in Figure 1. We see that the purely offline RL performance (at “0K” in Figure 1) is much better than SAC. However, with additional iterations of online fine-tuning, performance increases only very slowly (as seen from the slope of the BEAR curve in Figure 1). What causes this phenomenon?

The issue lies in fitting an accurate behavior model as data is collected online during fine-tuning. In the offline setting, the behavior model need only be trained once, but in the online setting it must be updated continually to track incoming data. Training density models online (in the “streaming” setting) is a challenging research problem in its own right, made more difficult by the potentially complex, multi-modal behavior distribution induced by the mixture of online and offline data. To address our problem setting, we require an off-policy RL algorithm that constrains the policy enough to prevent offline instability and error accumulation, but is not so conservative that it prevents online fine-tuning due to imperfect behavior modeling. Our proposed algorithm, which we discuss in the next section, accomplishes this by employing an implicit constraint that does not require any explicit model of the behavior policy.


Figure 4: An illustration of AWAC. High-advantage transitions are regressed onto with high weight, while low-advantage transitions receive low weight. Right: algorithm pseudocode.

Advantage Weighted Actor Critic

In order to avoid these issues, we propose an extremely simple algorithm: advantage weighted actor critic (AWAC). AWAC avoids the pitfalls described in the previous section through careful design decisions. First, for data efficiency, the algorithm trains a critic with dynamic programming. The question is then how to use this critic for offline training while avoiding the bootstrapping problem, and without modeling the data distribution, which may be unstable. To avoid bootstrapping error, we optimize the following constrained problem:

$$\pi_{k+1} = \arg\max_{\pi} \; \mathbb{E}_{a \sim \pi(\cdot|s)}\left[A^{\pi_k}(s, a)\right] \quad \text{s.t.} \quad D_{\mathrm{KL}}\left(\pi(\cdot|s) \,\|\, \pi_\beta(\cdot|s)\right) \leq \epsilon$$

We can compute the optimal solution to this problem in closed form and project our policy onto it, which results in the following actor update:

$$\theta_{k+1} = \arg\max_{\theta} \; \mathbb{E}_{s, a \sim \mathcal{B}}\left[\log \pi_\theta(a|s) \exp\left(\tfrac{1}{\lambda} A^{\pi_k}(s, a)\right)\right]$$

This results in an intuitive actor update that is also very effective in practice. The update resembles weighted behavior cloning: if the Q function were uninformative, it would reduce to behavior cloning the replay buffer, but with a well-formed Q estimate, the policy is weighted toward only the good actions. An illustration is given in Figure 4: the agent regresses onto high-advantage actions with a large weight while almost ignoring low-advantage actions. Please see the paper for an expanded derivation and implementation details.
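In code, this update is a weighted maximum-likelihood step. Below is a condensed PyTorch sketch under simplifying assumptions (a fixed-variance Gaussian policy, a single-sample estimate of $V(s)$, and toy dimensions); the full implementation is in rlkit:

import torch
import torch.nn as nn

obs_dim, act_dim, lam = 3, 2, 1.0                 # toy dimensions, temperature
Q = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
pi_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

s, a = torch.randn(8, obs_dim), torch.randn(8, act_dim)   # replay-buffer batch

pi_dist = torch.distributions.Normal(pi_net(s), 1.0)
with torch.no_grad():
    v = Q(torch.cat([s, pi_dist.sample()], -1))   # V(s) ~ Q(s, a'), a' ~ pi
    adv = Q(torch.cat([s, a], -1)) - v            # A(s, a) for buffer actions
    w = torch.exp(adv / lam).squeeze(-1)          # advantage weights

# Weighted behavior cloning: high-advantage buffer actions get large weight,
# low-advantage actions are nearly ignored. With an uninformative Q (roughly
# constant w) this reduces to plain behavior cloning of the buffer.
actor_loss = -(w * pi_dist.log_prob(a).sum(-1)).mean()
actor_loss.backward()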

Experiments

So how well does this actually address our concerns from earlier? In our experiments, we show that we can learn difficult, high-dimensional, sparse-reward dexterous manipulation problems from human demonstrations and off-policy data. We then evaluate our method with suboptimal prior data generated by a random controller. Results on standard MuJoCo benchmark environments (HalfCheetah, Walker, and Ant) are also included in the paper.

Dexterous Manipulation


Figure 5: Top: performance of various methods after online training (pen: 200K steps, door: 300K steps, relocate: 5M steps). Bottom: learning curves on dexterous manipulation tasks with sparse rewards. Step 0 corresponds to the start of online training after offline pre-training.

We aim to study tasks representative of the difficulties of real-world robot learning, where offline learning and online fine-tuning are most relevant. One such setting is the suite of dexterous manipulation tasks proposed by Rajeswaran et al., 2017. These tasks involve complex manipulation skills using a 28-DoF five-fingered hand in the MuJoCo simulator: in-hand rotation of a pen, opening a door by unlatching the handle, and picking up a sphere and relocating it to a target location. These environments exhibit many challenges: high-dimensional action spaces, complex manipulation physics with many intermittent contacts, and randomized hand and object positions. The reward functions in these environments are binary 0-1 rewards for task completion. Rajeswaran et al. provide 25 human demonstrations for each task, which are not fully optimal but do solve the task. Since this dataset is very small, we generated another 500 trajectories of interaction data by training a behavior-cloned policy on the demonstrations and then sampling from it.

First, we compare our method on the dexterous manipulation tasks described above against prior methods for off-policy learning, offline learning, and bootstrapping from demonstrations. The results are shown in Figure 5. Our method uses the prior data to quickly attain good performance, and its efficient off-policy actor-critic component fine-tunes much more quickly than DAPG. For example, our method solves the pen task in 120K timesteps, the equivalent of just 20 minutes of online interaction. While the baselines and ablations are able to make some progress on the pen task, alternative off-policy RL and offline RL algorithms are largely unable to solve the door and relocate tasks in the timeframe considered. We find that using off-policy critic estimation allows AWAC to significantly outperform AWR, while the implicit behavior modeling allows AWAC to significantly outperform ABM, although ABM does make some progress.

Fine-Tuning from Random Policy Data

An advantage of off-policy RL is that we can also incorporate suboptimal data, rather than only demonstrations. In this experiment, we evaluate on a simulated tabletop pushing environment with a Sawyer robot.

To study the potential to learn from suboptimal data, we use an off-policy dataset of 500 trajectories generated by a random process. The task is to push an object to a target location in a 40 cm x 20 cm goal space.

We see that while many methods begin at the same initial performance, AWAC learns the fastest online and is actually able to make effective use of the offline dataset, as opposed to some methods that are completely unable to learn.

Future Directions

Being able to use prior data and fine-tune quickly on new problems opens up many new avenues of research. We are most excited about using AWAC to move RL from the single-task regime to the multi-task regime, with data sharing and generalization between tasks. A strength of deep learning has been its ability to generalize in open-world settings, which has already transformed the fields of computer vision and natural language processing. To achieve the same type of generalization in robotics, we will need RL algorithms that take advantage of vast amounts of prior data. One key distinction in robotics, however, is that collecting high-quality data for a task is very difficult, often as difficult as solving the task itself. This is unlike, for instance, computer vision, where humans can label the data. Active data collection (online learning) will therefore be an important piece of the puzzle.

This work also suggests a number of algorithmic directions to move forward. Note that in this work we focused on the mismatched action distributions between the policy $\pi$ and the behavior data $\pi_\beta$. In off-policy learning, there is also a mismatch in the marginal state distributions between the two. Intuitively, consider a problem with two solutions, A and B, where B yields higher return and the off-policy data demonstrates only solution A. Even if the robot discovers solution B during online exploration, the off-policy data still consists mostly of data from path A. Thus the Q-function and policy updates are computed over states encountered while traversing path A, even though the robot will not encounter these states when executing the optimal policy. This problem has been studied previously. Accounting for both types of distribution mismatch will likely result in better RL algorithms.

Finally, we are already using AWAC as a tool to speed up our research. When we set out to solve a task, we do not usually try to solve it from scratch with RL. First, we may teleoperate the robot to confirm the task is solvable; then we might run some hard-coded policies or behavior cloning experiments to see whether simple methods can already solve it. With AWAC, we can save all of the data from these experiments, as well as other experimental data such as that from hyperparameter sweeps of an RL algorithm, and use it as prior data for RL.
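In practice this amounts to pooling every saved transition into one buffer before training starts. A minimal sketch, assuming a hypothetical directory of pickled trajectories:

import glob
import pickle

# Hypothetical layout: each .pkl holds a list of (s, a, r, s') tuples saved
# from teleoperation, scripted policies, BC runs, or old RL sweeps.
replay_buffer = []
for path in glob.glob("prior_experiments/*.pkl"):
    with open(path, "rb") as f:
        replay_buffer.extend(pickle.load(f))

# Offline training runs on replay_buffer as-is; online fine-tuning then keeps
# appending new transitions to the same buffer.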


A preprint of the work this blog post is based on is available here. Code is now included in rlkit. The code documentation also contains links to the data and environments we used. The project website is available here.



from The Berkeley Artificial Intelligence Research Blog https://ift.tt/2DJj3fR
via A.I .Kung Fu

Verizon Media teams up with NFL to let fans virtually watch games together - CNET

The feature will eventually extend to a variety of sporting and music events.

from CNET News https://ift.tt/3hjthBo
via A.I .Kung Fu

Tenet: That ending explained and all your questions answered - CNET

Christopher Nolan's latest mind-bender might seem mind-boggling. Here are some answers to what the hell happened.

from CNET News https://ift.tt/3bKknvr
via A.I .Kung Fu

The 30 best movies to see on Netflix - CNET

Don't know what to watch tonight? Here are some of the best movies Netflix has to offer.

from CNET News https://ift.tt/2R8VXCn
via A.I .Kung Fu

The frequently scarce Nintendo Switch Lite is in stock right now - CNET

With production expected to return to normal soon, here's the status of the $200 version of Nintendo's gaming console.

from CNET News https://ift.tt/32mUvBS
via A.I .Kung Fu

The Nintendo Switch is back in stock -- both Red/Blue and Gray models - CNET

Here are the latest details on where you can buy the red-hot $300 console online.

from CNET News https://ift.tt/2GgpXtU
via A.I .Kung Fu

Put the new August Smart Lock with Wi-Fi on your front door for $203 - CNET

That's the lowest price ever for this new, fourth-generation smart lock.

from CNET News https://ift.tt/3bKnBPz
via A.I .Kung Fu

Save $250 on the smartphone-controlled Brava countertop oven - CNET

Make entire meals in this oven that can bake, broil, sear, toast, reheat and even dehydrate.

from CNET News https://ift.tt/3bFsnOy
via A.I .Kung Fu

Eric Ries' Silicon Valley-based Long-Term Stock Exchange opens for trading, nine years after "The Lean Startup" author first proposed it (Biz Carson/Protocol)

Biz Carson / Protocol:
Eric Ries' Silicon Valley-based Long-Term Stock Exchange opens for trading, nine years after “The Lean Startup” author first proposed it  —  Trading is starting on the Long-Term Stock Exchange.  Next up?  Attracting a listing.  —  A new stock exchange backed by Silicon Valley heavyweights …



from Techmeme https://ift.tt/2RaO13F
via A.I .Kung Fu

In an updated prospectus, Palantir says it has 1.64B shares outstanding as of Sept. 1, indicating the company is valued at ~$10.5B, down from $20.4B in 2015 (Ari Levy/CNBC)

Ari Levy / CNBC:
In an updated prospectus, Palantir says it has 1.64B shares outstanding as of Sept. 1, indicating the company is valued at ~$10.5B, down from $20.4B in 2015  —  Palantir said in its updated prospectus on Wednesday that it has 1.64 billion shares outstanding, as of Sept. 1.



from Techmeme https://ift.tt/33buO7H
via A.I .Kung Fu

Unity files S-1 for its IPO, seeking to raise up to $1.05B at a range of $34-$42 per share, giving it a valuation of ~$11.06B (C Nivedita/Reuters)

C Nivedita / Reuters:
Unity files S-1 for its IPO, seeking to raise up to $1.05B at a range of $34-$42 per share, giving it a valuation of ~$11.06B  —  (Reuters) - Unity Software Inc will look to raise up to $1.05 billion in its initial public offering, it said on Wednesday, giving the Silicon Valley-based software startup …



from Techmeme https://ift.tt/35lbc3v
via A.I .Kung Fu

Israel-based Pcysys, which develops an automated penetration testing service for cybersecurity risk assessment, raises $25M Series B led by Insight Partners (Globes Online)

Globes Online:
Israel-based Pcysys, which develops an automated penetration testing service for cybersecurity risk assessment, raises $25M Series B led by Insight Partners  —  Pcysys (Proactive Cybersystems) has developed PenTera, an Automated Penetration Testing platform.



from Techmeme https://ift.tt/2FfHpyv
via A.I .Kung Fu

Tuesday, September 8, 2020

Optimize.health, formerly called Pillsy, raises $15.6M for its end-to-end remote patient monitoring service, says its revenue recently increased by 800%+ YoY (Taylor Soper/GeekWire)

Taylor Soper / GeekWire:
Optimize.health, formerly called Pillsy, raises $15.6M for its end-to-end remote patient monitoring service, says its revenue recently increased by 800%+ YoY  —  Seattle startup Optimize.health raised $15.6 million to help meet demand for its remote patient monitoring technology as investor interest …



from Techmeme https://ift.tt/3hcNhpt
via A.I .Kung Fu

DevOps company JFrog has set terms for its US IPO with a price range of $33 to $37, valuing the company between $3B and $3.3B (Shiri Habib-Valdhorn/Globes Online)

Shiri Habib-Valdhorn / Globes Online:
DevOps company JFrog has set terms for its US IPO with a price range of $33 to $37, valuing the company between $3B and $3.3B  —  The Israeli software company's founders will sell shares for about $50 million.  —  Israeli automatic software updating company JFrog today set terms for its Wall Street Initial Public Offering (IPO).



from Techmeme https://ift.tt/35hFFj5
via A.I .Kung Fu

The best battery-powered portable power stations of 2020 - CNET

Get power wherever you are with a portable generator.

from CNET News https://ift.tt/3k0bBwp
via A.I .Kung Fu

Senate Republicans introduce bill aimed at modifying Section 230 - CNET

New legislation would base legal liability protections on whether content moderation is being conducted on an "objectively reasonable belief" standard.

from CNET News https://ift.tt/3k3BnQx
via A.I .Kung Fu

2021 Land Rover Defender range adds short-wheelbase 90 and new trim levels - Roadshow

US customers can finally order the two-door Defender 90.

from CNET News https://ift.tt/2R9G2nn
via A.I .Kung Fu

The 2022 Genesis G70 refresh looks awesome - Roadshow

The interior gets a much-needed infotainment update, too.

from CNET News https://ift.tt/2FbwzJZ
via A.I .Kung Fu

LG Wing: Video of LG's supposed wacky swiveling phone leaked - CNET

You do you, LG.

from CNET News https://ift.tt/3mbuvCD
via A.I .Kung Fu

The moon is rusting, and Earth is to blame - CNET

Researchers find rust on the moon, which is weird but not inexplicable.

from CNET News https://ift.tt/3bQHQeR
via A.I .Kung Fu

First The Mandalorian season 2 images reveal Baby Yoda is still a baby - CNET

The Star Wars legend is back and hasn't aged a day.

from CNET News https://ift.tt/3bDeSyK
via A.I .Kung Fu

Best home tech of fall 2020: Coffee makers, fire pits and other cozy gadgets - CNET

These products will help keep you cozy in the colder months.

from CNET News https://ift.tt/32cLIU5
via A.I .Kung Fu

Oxford coronavirus vaccine trial on hold after adverse reaction in participant - CNET

The phase 3 clinical trial has been paused while researchers investigate what caused the reaction.

from CNET News https://ift.tt/3m0u8uh
via A.I .Kung Fu

Thunes, a B2B fintech startup developing a cross-border payments network for emerging markets, raises $60M Series B; the company now operates in ~100 countries (Catherine Shu/TechCrunch)

Catherine Shu / TechCrunch:
Thunes, a B2B fintech startup developing a cross-border payments network for emerging markets, raises $60M Series B; the company now operates in ~100 countries  —  Thunes, a Singapore-based startup developing a cross-border payments network to make financial services more accessible in emerging markets …



from Techmeme https://ift.tt/2Zjg05I
via A.I .Kung Fu

Data analytics company Sumo Logic is looking to raise $310.8M in its US IPO at a price range of $17 to $21 per share, valuing the company at over $2B (Reuters)

Reuters:
Data analytics company Sumo Logic is looking to raise $310.8M in its US IPO at a price range of $17 to $21 per share, valuing the company at over $2B  —  (Reuters) - Big data firm Sumo Logic Inc said on Tuesday it was looking to raise $310.8 million in a U.S. initial public offering that could value the company at over $2.07 billion.



from Techmeme https://ift.tt/2Flivgx
via A.I .Kung Fu

Hasura, a service that helps developers access databases via its open source GraphQL API, raised $25M Series B led by Lightspeed Venture Partners (Frederic Lardinois/TechCrunch)

Frederic Lardinois / TechCrunch:
Hasura, a service that helps developers access databases via its open source GraphQL API, raised $25M Series B led by Lightspeed Venture Partners  —  Hasura, a service that provides developers with an open-source engine that provides them a GraphQL API to access their databases …



from Techmeme https://ift.tt/2ZjYX3D
via A.I .Kung Fu

Facebook engineer quits, accuses social network of 'profiting off hate' - CNET

The social media giant is facing a backlash from its own employees over content moderation decisions.

from CNET News https://ift.tt/3jWNzlZ
via A.I .Kung Fu

Kim Kardashian says Keeping up with the Kardashians will end in 2021 - CNET

The reality series launched in 2007 and produced a dozen spin-off shows.

from CNET News https://ift.tt/3idomDt
via A.I .Kung Fu

Celebrate Psycho's 60th anniversary by buying it in 4K for $9 - CNET

You can get three other Hitchcock films in 4K -- Rear Window, The Birds and Vertigo -- for $9 each as well.

from CNET News https://ift.tt/2DHeG4Z
via A.I .Kung Fu

This dress can read your mind and changes shape accordingly - CNET

It's like wearing your neurons on your sleeve.

from CNET News https://ift.tt/2ZlXjya
via A.I .Kung Fu

15 best TV shows to binge on Amazon Prime Video - CNET

Searching for a great show to watch tonight? Let's round up Amazon's best gems.

from CNET News https://ift.tt/3a0nUoD
via A.I .Kung Fu

Tenet: The ending explained and all your questions answered - CNET

Christopher Nolan's latest mind-bender can be mind-boggling. Here are some answers to what the hell happened.

from CNET News https://ift.tt/3hubA2U
via A.I .Kung Fu

Tesla stock drops 21% in a single day after not being added to the S&P 500 - Roadshow

This is a notable reversal of trend, given that Tesla's stock has soared in value over the last year.

from CNET News https://ift.tt/2Zl2OgK
via A.I .Kung Fu

Report: Samsung and SK Hynix plan to stop component sales to Huawei on September 15, the day US Commerce Department limits take effect (Adi Robertson/The Verge)

Adi Robertson / The Verge:
Report: Samsung and SK Hynix plan to stop component sales to Huawei on September 15, the day US Commerce Department limits take effect  —  SK Hynix is also reportedly dropping Huawei  —  Samsung and SK Hynix will reportedly stop selling components to Huawei as the Trump administration tightens sanctions on the Chinese phone maker.



from Techmeme https://ift.tt/3bBCLqw
via A.I .Kung Fu

Indian online learning giant Byju's raises $500M led by Silver Lake, source says at a $10.8B valuation (Manish Singh/TechCrunch)

Manish Singh / TechCrunch:
Indian online learning giant Byju's raises $500M led by Silver Lake, source says at a $10.8B valuation  —  Byju's has raised $500 million in a new financing round that valued the Indian online learning platform at $10.8 billion, a source familiar with the matter said.



from Techmeme https://ift.tt/3bAshHX
via A.I .Kung Fu

Microsoft confirms a budget version of its next-gen console, the $299 Xbox Series S, says it is the "smallest Xbox ever" (Christine Fisher/Engadget)

Christine Fisher / Engadget:
Microsoft confirms a budget version of its next-gen console, the $299 Xbox Series S, says it is the “smallest Xbox ever”  —  After a flurry of leaks, Microsoft has been forced to prematurely confirm the existence of a second-generation console: the Xbox Series S. The company …



from Techmeme https://ift.tt/3jXzgh6
via A.I .Kung Fu

The best office chairs to buy this year - CNET

If your office chair is causing you pain, it might be time for an upgrade.

from CNET News https://ift.tt/2GFCY0t
via A.I .Kung Fu

Microsoft confirms $300 Xbox Series S

Microsoft has taken to Twitter to confirm that the Xbox Series S is real, and the budget version of its next-gen console will cost $300.

from VentureBeat https://ift.tt/3bAhgX7
via A.I .Kung Fu

Researchers propose using AI in collaboration with human input to draw up electoral districts to combat gerrymandering (Science)

Science:
Researchers propose using AI in collaboration with human input to draw up electoral districts to combat gerrymandering



from Techmeme https://ift.tt/3ia31L7
via A.I .Kung Fu

Monday, September 7, 2020

Adult Swim cancels The Venture Bros after 17 years - CNET

The popular adult animated cartoon spanned seven seasons over 17 years, making it one of the longest-running original series on Adult Swim.

from CNET News https://ift.tt/32dcax3
via A.I .Kung Fu

China takes aim at US 'bullying' of its tech firms

The Chinese government has clashed with America over tech security and protecting consumers' data.

from BBC News - Technology https://ift.tt/3h9Nt8A
via A.I .Kung Fu

30 of the best movies to stream on Disney Plus - CNET

Searching for what to watch other than Marvel or Star Wars? Here are some of the hidden gems on Disney Plus.

from CNET News https://ift.tt/35erU4x
via A.I .Kung Fu

Mulan boycott explained: Why some fans are skipping Disney's new remake - CNET

The credit reel in Disney's new remake has caused a huge stir on social media.

from CNET News https://ift.tt/2Fi9GnH
via A.I .Kung Fu

Astronomers find no signs of alien tech after scanning over 10 million stars - CNET

An Australian telescope examined a huge patch of the sky for signs of life. It came up empty-handed.

from CNET News https://ift.tt/2R5qnFG
via A.I .Kung Fu

Xbox Series S design and price reportedly revealed in leaked image - CNET

The price is reportedly $299 US.

from CNET News https://ift.tt/2Zd8RDN
via A.I .Kung Fu