Dog does the CUTEST little dance when he wants to play (Video)


Source link

Related posts Tagged : / / / / /

What does a farmer gotta do to take a poop in peace? (Video)

Via TikTok/agro_lele


TRIVIUM’s MATT HEAFY Does A Metal Rendition Of “Burn Butcher Burn” From The Witcher Season 2

Trivium guitarist and vocalist Matt Heafy recently celebrated the release of The Witcher Season 2 with a cover of “Burn Butcher Burn.” Now he’s back with a more metal take on the song and of course it rules. The song is sung by Joey Batey, who plays Jaskier the bard, and was written by both Batey and Joseph Trapanese.


Heafy signed a solo deal with Trivium‘s label Roadrunner Records in 2020, and celebrated with a cover of the Internet-hit song “Toss a Coin to Your Witcher” from the first season of The Witcher. You can check that out here.



Wendy’s Twitter doing what it does best: dishing out savage burns (42 Photos)



Is WhatsApp safe? How does its end-to-end encryption work?

WhatsApp is the most used chat application in the world, handily surpassing rivals like Messenger, Signal, and Telegram. Given how much sensitive data we tend to share in online conversations, is the app safe to use? Moreover, should you be worried about potential hacks or data leaks, even with the encryption WhatsApp claims to offer?

In this article, let’s answer those questions by taking a closer look at WhatsApp’s security measures, including end-to-end encryption. Later, we’ll also discuss some additional features you can take advantage of to keep your chats safe from prying eyes.

WhatsApp security: What is end-to-end encryption?
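The short answer before we dig in: end-to-end encryption means only the two chat endpoints ever hold the keys, so the server just relays ciphertext it cannot read. WhatsApp uses the Signal protocol for this, but the core idea can be sketched with a toy Diffie-Hellman key exchange and a toy cipher (illustrative only; the small parameters and XOR cipher here are not secure, and this is not WhatsApp's actual implementation):

```python
import hashlib

# Toy Diffie-Hellman parameters (real protocols use curves like Curve25519).
P = 0xFFFFFFFB  # a small prime, far too small for real use
G = 5

def derive_shared_key(my_secret: int, their_public: int) -> bytes:
    """Both parties compute the same shared secret without ever sending it."""
    shared = pow(their_public, my_secret, P)
    return hashlib.sha256(str(shared).encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher; a stand-in for a real one like AES-256."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Alice and Bob each pick a private number and publish only G^secret mod P.
alice_secret, bob_secret = 123456, 654321
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side derives the same key; the server only ever sees the public values.
alice_key = derive_shared_key(alice_secret, bob_public)
bob_key = derive_shared_key(bob_secret, alice_public)
assert alice_key == bob_key

ciphertext = xor_cipher(alice_key, b"See you at 8?")
print(xor_cipher(bob_key, ciphertext))
```

The point of the sketch is that the key never travels over the network, so anyone sitting in the middle (including WhatsApp's servers) sees only the ciphertext.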



Moonlight gaming tool: What is it and why does every PC gamer need it?


By now, most people are familiar with cloud gaming, otherwise known as game streaming. With services such as Google Stadia, Xbox Game Pass Ultimate, Amazon Luna, etc., you can play PC games without needing any PC hardware. Servers owned by the companies run the games and stream the gameplay over the internet to your device. The Moonlight gaming tool is a lot like these services, but your own gaming rig acts as the server.

In this article, we’re going to tell you all about the Moonlight gaming tool and why every PC gamer should use it. We’re going to go over how it works, what you need to get set up, and more!

What is the Moonlight gaming tool?


As mentioned already, Moonlight is a program that allows you to stream games over the internet from your gaming PC (aka “host”) to your phone, tablet, laptop, or TV (aka “client”). Assuming you have fast enough internet connections for both the host and the client, you can play your PC games anywhere — even if your gaming rig is physically miles away.

Essentially, the Moonlight gaming tool is like your own personal Google Stadia.

Instead of streaming games from Google’s or Amazon’s servers, you stream from your gaming PC. And, instead of paying Google or Amazon to buy games and stream the content, you can stream the games you already own as much as you like for free. Plus, you can stream any game, not just the ones Google, Amazon, or Microsoft offer.

Moonlight is free and open-source, so you can install it on as many systems as you like without any cost. If you’re a software developer, you can even contribute to the further development of Moonlight.

Why not just use Steam Link?


If you’re a fan of Steam, you probably know Valve has its own app for streaming your Steam library. This app is called Steam Link and is inspired by the discontinued Steam Link hardware. Steam Link is available for free on a multitude of platforms (including Android). Like the Moonlight gaming tool, it allows you to use your gaming rig as a host to stream games to clients.

However, Steam Link has two major problems. The first is, quite obviously, that it’s designed to stream Steam games. If you buy your games through other methods, the Steam Link app will require you to “install” the game through Steam. While this is straightforward to do, it sometimes results in poor streaming.

See also: What is cloud gaming?

The other obstacle is that sometimes it won’t let you stream certain Steam games, either. Even if a game works fine when played directly on your rig, it may show only a black screen when you fire it up in Steam Link. This can be due to DRM issues.

In other words, Steam Link is a useful tool if you are primarily a Steam user and the games you want to play are supported. The benefit of the Moonlight gaming tool is that both of these limitations are gone. You can stream any game you want from any source. As long as it’s installed on your host PC, you can play it on any of your clients.

How does Moonlight work?


If you have everything set up correctly, you can fire up the Moonlight app on your client — let’s say a smartphone. Once the app is opened, you can navigate through your library of games installed back home on your host PC. Just select the game you want to play and your host PC will open it and start streaming gameplay to your phone. It’s really that simple!

Essentially, Moonlight is just a fancy way of mirroring your gaming PC’s desktop remotely. Using software created by Nvidia, Moonlight streams the visuals from your host PC to the client. Simultaneously, it streams your inputs via a controller or keyboard/mouse back to the PC. This creates an input/response loop.

Related reading: What’s the best controller for PC users?

Assuming your internet connections are fast enough at both points, it should only take milliseconds for your inputs on the client to be received by the host and then the visual response of those inputs to stream back to the client. Naturally, this creates a certain amount of latency, or lag. However, if everything is working properly, it should be a small enough latency that you would barely notice.
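To see why those milliseconds add up, you can tally each stage of the loop. The numbers below are illustrative assumptions, not measurements of Moonlight itself:

```python
# Rough per-input latency budget for game streaming. Every value here is an
# assumed figure for illustration; real numbers depend on your hardware,
# encoder settings, and network.
budget_ms = {
    "input capture on client": 1,
    "network (client -> host)": 10,
    "capture + encode on host": 5,
    "network (host -> client)": 10,
    "decode + display on client": 5,
}

total = sum(budget_ms.values())
print(f"Estimated input-to-photon latency: {total} ms")

# At 60 fps a frame lasts about 16.7 ms, so this round trip spans ~2 frames.
frames = total / (1000 / 60)
print(f"That is roughly {frames:.1f} frames at 60 fps")
```

A couple of frames of extra delay is invisible in a single-player RPG, which is exactly why the next paragraph steers competitive players elsewhere.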

Still, the Moonlight gaming tool is not going to be very useful for competitive gaming. Single-player games, turn-based RPGs, visual novels, and other games where a millisecond of reaction time isn’t going to make or break your run are better suited to Moonlight.

What does your host PC need to use Moonlight?


As mentioned in the previous section, the Moonlight gaming app is built on a protocol developed by Nvidia. Unfortunately, this does mean Moonlight only works on host PCs with Nvidia graphics cards. AMD and integrated Intel graphics users will need to rely on Steam Link and other apps instead, as Moonlight won’t work for them. However, this only applies to the host. Your clients do not need to be equipped with Nvidia hardware.

Here are the Nvidia cards Moonlight supports:

  • Nvidia GeForce GTX or RTX (600-series or later, excluding GT-series cards)
  • Nvidia Quadro (Kepler or later)

Additionally, you’ll need the following:

  • A host PC running Windows
  • The latest version of Nvidia’s GeForce Experience app

Outside of the graphics card, Windows, and the GeForce Experience app, your gaming PC can be of any make and model. It does not need to be incredibly powerful, either. Since the rig will be “playing” the game while you stream it, though, it will need to be powerful enough to play said game. In other words, if your game doesn’t play well on your gaming PC, it won’t play well on your Moonlight client, either. Remember, you’re basically just mirroring your system’s display remotely, so your gaming rig needs to be up to the task!

What devices can you stream to?


The real treat of Moonlight is how easy it is to use on all the devices you already own. Imagine being in a hotel room and streaming games on your Chromebook from your PC back home. Imagine being on a train and playing a PC game on your smartphone. Or, think of how cool it would be to play your PC games at your parents’ house right on their TV. This is all possible!

As of right now, you can install the client version of the Moonlight gaming tool on systems of all kinds, including Windows, macOS, and Linux PCs, Android phones, tablets, and TVs, iOS devices, and single-board computers like the Raspberry Pi.

Additionally, if you have homebrew-enabled versions of systems like the Nintendo Switch or PS Vita, you can use those as Moonlight clients as well.

How can you set up the Moonlight gaming tool?


The Moonlight team has an incredibly detailed setup guide here. It goes over not only how to get Moonlight running on a variety of hosts and clients, but also how to pull off neat tricks like using a rented cloud server to stream games, using Moonlight as a productivity tool, and much more.

However, most folks will just want the basics. Here’s how to start using the Moonlight gaming tool with an Nvidia GTX/RTX GPU installed on a Windows PC as the host and an Android phone as the client.

  1. On your host PC, install the GeForce Experience app. If you already have it, just make sure you’re on the latest version.
  2. Start the Experience app and go to Settings > Shield. Here, make sure the GameStream toggle is switched on.
  3. Download, install, and start the Moonlight host app on your PC.
  4. Make sure your Android phone is connected to the same network as your host PC. Download, install, and start the Android Moonlight app.
  5. When you start the Android app, it should recognize your gaming PC in just a few seconds. Tap on the image that appears.
  6. You’ll get a PIN on your phone that you’ll then need to enter on your PC. Do so to accept the pairing of the two devices.
  7. Once paired, that’s it! Fire up a game on your Android phone and watch as it streams like magic.

Next, you’ll most likely want to connect your Android phone to your host PC even when you’re not on the same network. That’s really easy! Just follow the instructions here.

Once you’ve got everything going, you’ll want to tweak your settings within Moonlight to get the best experience. Moonlight has an FAQ and a troubleshooting guide to help out with that. Have fun!


Does Anybody Recognize this Font?


Do we need a caption? Name that font!

The post Does Anybody Recognize this Font? appeared first on People Of Walmart.


What kind of hair product does that?


The post What kind of hair product does that? appeared first on People Of Walmart.


What Does SHINEDOWN’s Cryptic Alphabet On Their Instagram Mean?

Shinedown has just deleted all the other posts on their Instagram and posted a video of a very cryptic alphabet with the caption “So it begins… 😈 #nowiknowmyabcs… Do you?” There isn’t much to go on yet, though the band does have an Instagram story going that only features the letter Z right now.


What we do know is that Shinedown began working on a new record in February 2021. Regarding new music, Shinedown vocalist Brent Smith told WSOU 89.5 FM back in August: “I can tell you right now the first single, you’re gonna hear it in the first month of 2022. And then hopefully a couple of months later, there’ll be a [new album] out.” So stay tuned!

The new Shinedown record will be their first since Attention Attention in 2018.



Where does someone acquire such clothes?


Does Walmart offer this kind of outfit?

The post Where does someone acquire such clothes? appeared first on People Of Walmart.


Verizon-bound TCL 30 V 5G does mmWave, 30 XE 5G is headed for T-Mobile

TCL is on a mission to deliver affordable 5G connectivity to everyone and will release a number of products over the coming months. Today’s launches are focused on the US and include two smartphones, one each for two of the country’s leading carriers.

TCL 30 V 5G

Let’s start with the TCL 30 V 5G, which as you can probably guess is for Verizon, specifically its mmWave network. The phone is powered by a Snapdragon 480 (8 nm), which has support for the faster flavor of 5G (it also does sub-6, of course). This is the higher-end of the two models.


It has a slightly larger, sharper 6.67” display with 1080p+ resolution (20:9) and Gorilla Glass 3. The punch-hole selfie camera takes 16 MP photos and can record 1080p@30 fps video. It does face unlocking too, but you can also use the rear-mounted fingerprint reader for securing your phone.

The other place where the V model stands out is the rear camera setup. The star of the show is the 50 MP main camera on the rear (sensor unknown), joined by a 5 MP ultrawide module and a 2 MP macro camera. Unfortunately, the main cam can only record 1080p video at 30 fps (no 4K, not even 60 fps).


The chipset is paired with 4 GB of RAM and 128 GB built-in storage (of which 98 GB are user accessible). This can be expanded with a microSD card up to 1 TB in size. The phone also accepts a single nano-SIM card and can act as a Wi-Fi 5 (ac) hotspot for up to 10 devices. HD voice is supported.


Wired connectivity on the TCL 30 V includes a 3.5 mm headphone jack (you can also use the dual speakers for audio), plus a USB-C port on the bottom, which is responsible for charging the 4,500 mAh battery at 18W. The charger is included in the box and it needs 2 hours to get to 100%.

TCL 30 XE 5G

The TCL 30 XE 5G is the company’s first phone for T-Mobile and Metro, though it will later be made available to other carriers as well. US carriers, that is, since both it and the V are US exclusives. TCL has 30-series phones ready to launch in Europe, but those will be unveiled at MWC.


Anyway, this model is marginally smaller with a 6.52” display, which supports 90 Hz refresh rate (the V display only does 60 Hz), along with 180 Hz touch sampling rate. The bad news is that it only has 720p+ resolution (20:9). Also, it switches to DragonTrail 3 for its protective glass and a V-notch for its 8 MP selfie camera. It does keep the fingerprint reader on the back, though.

The main camera here is a basic 13 MP unit, which at least matches the 1080p@30 fps video capabilities of the higher-resolution camera on the V model. There’s no ultrawide lens here, just 2 MP macro and 2 MP depth modules.


The other change is the chipset – a Dimensity 700 (7 nm), which is actually slightly faster than the Snapdragon chip. It gets the same 4 GB of RAM, but half the storage, 64 GB (of which 43 GB are user accessible), plus a microSD slot for up to 512 GB more. Both phones are launching with Android 11 out of the box.

The phone supports HD Voice, along with VoLTE and VoWiFi and can be a Wi-Fi 5 (ac) hotspot for up to 10 devices. It lacks mmWave connectivity, but sub-6 5G should still be plenty fast.


Additionally, the TCL 30 XE has a 3.5 mm headphone jack, but only a single speaker (1W). The USB-C port supports the same 18W charging for the 4,500 mAh battery and again the charger is supplied in the retail package.


The TCL 30 V 5G and TCL 30 XE 5G will become available in the US in the coming weeks; pricing will be announced then. The XE will be launched on other US carriers later this year.

These two phones are just a glimpse of what’s to come from the TCL 30 series – more of them will be unveiled at the MWC (which starts at the end of February).


What is machine learning and how does it work?

From personal assistants like Google Assistant and Alexa to content recommendations from YouTube and Amazon, it’s hard to think of a service or technology that machine learning hasn’t radically improved over the past few years.

Simply put, machine learning is a subset of artificial intelligence that allows computers to learn from their own experiences — much like we do when learning or picking up a new skill. When implemented correctly, the technology can perform certain complex tasks better than any human, and often within seconds.

Given how pervasive machine learning has become in today’s society, you may wonder how it works and what its limitations are. To that end, here’s a simple primer on the technology. Don’t worry if you don’t have a background in computer science — this article is just a high-level overview of what happens under the hood.

What is machine learning?


Even though many people conflate the terms machine learning (ML) and artificial intelligence (AI), there’s actually a distinction between the two. To understand why, it’s worth talking about how artificial intelligence started off in the first place.

Early applications of AI, theorized around 50 years ago, were extremely basic by today’s standards. A chess game where you play against computer-controlled opponents, for instance, could once be considered revolutionary. It’s easy to see why — the ability to solve problems based on a set of rules can qualify as basic “intelligence”, after all. These days, however, we’d consider such a system extremely rudimentary as it lacks experience — a key component of human intelligence. This is where machine learning comes in.

Machine learning enables computers to learn or train themselves from massive amounts of existing data.

Machine learning adds an entirely new dimension to artificial intelligence — it enables computers to learn or train themselves from massive amounts of existing data. In this context, “learning” means forming relationships and extracting new patterns from a given set of data. This is a lot like how human intelligence works as well. When we come across something unfamiliar, we use our senses to study its features and can use our memory to recognize it the next time.

How does machine learning work?


Broadly speaking, a machine learning problem can be solved in two distinct phases: training and inference. In the first stage, a computer algorithm analyzes a bunch of sample or training data to extract relevant features and patterns. Each algorithm is generally optimized for a certain type of data. The data can be anything — numbers, images, text, and even speech.

The success of the training process, meanwhile, is directly linked to three factors: the algorithm itself, the amount of data you feed it, and the dataset’s quality. Every now and then, researchers propose new algorithms or techniques that improve accuracy and reduce errors, as you’d expect from cutting-edge technology. Increasing the amount of data you offer the algorithm, on the other hand, can also help cover more edge cases.

Machine learning programs involve two distinct stages: training and inference.

The output of a machine learning algorithm is often referred to as a model. You can equate ML models to a dictionary or reference manual, as they’re used for future predictions. In other words, we use trained models to infer results from new data that our program has never seen before.

The training process usually involves analyzing thousands or even millions of samples. As you’d expect, this is a fairly hardware-intensive process that needs to be completed ahead of time. Once the training process is complete and all of the relevant features have been analyzed, however, some resulting models can be small enough to fit on common devices like smartphones.
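The train-then-infer split can be sketched with a toy algorithm: a nearest-centroid classifier. Training crunches all the labeled samples once and distills them into a tiny model (one average point per label); inference then needs only that model, not the original data. This is an illustrative sketch, not any production ML library:

```python
def train(samples):
    """Training phase: analyze every labeled sample once and distill them
    into a small model: one average point (centroid) per label."""
    grouped = {}
    for label, features in samples:
        grouped.setdefault(label, []).append(features)
    return {
        label: tuple(sum(vals) / len(vals) for vals in zip(*points))
        for label, points in grouped.items()
    }

def infer(model, features):
    """Inference phase: the training data is gone; only the compact model
    is consulted to classify a never-before-seen sample."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: dist(model[label]))

# Tiny made-up dataset: (label, (height_cm, weight_kg))
training_data = [
    ("cat", (25, 4)), ("cat", (23, 5)),
    ("dog", (60, 25)), ("dog", (55, 20)),
]
model = train(training_data)
print(infer(model, (58, 22)))  # classify a new, unseen sample
```

Notice that `model` is just two tuples, tiny compared to the training set. That asymmetry is exactly why a model trained on a server farm can later run on a phone.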

Consider a machine learning application that interprets handwritten text, for example. As part of the training process, a developer first feeds an ML algorithm with sample images. This eventually gives them an ML model that can be packaged and deployed within something like an Android application. When users install the app and feed it with new images of their own, their devices can reference the model to infer new results. In the real world, you won’t see any of this, of course — the app will simply convert handwritten words into digital text.

Training a machine learning model is a hardware-intensive task that may take several hours or even days.

While early machine learning applications relied on the cloud for training and inference, recent technological advancements have enabled local, on-device inference as well. Of course, this largely depends on the algorithm and hardware used — as we’ll discuss in a later section.

For now, here’s a rundown of the various machine learning training techniques and how they differ from each other.

Supervised, unsupervised, and reinforcement learning


In a nutshell, the data used to train the algorithm can fall under one of two categories: labeled and unlabeled. As you may have guessed from the name, supervised learning involves a labeled dataset, which helps the training algorithm know what it’s looking for.

Take a model whose sole purpose is to identify images of dogs and cats, for example. If you feed the algorithm labeled images of the two animals, it’s a simple case of supervised learning. However, if you expect the algorithm to figure out the differentiating features all on its own (that is, without labels indicating whether an image contains a dog or a cat), it becomes unsupervised learning.

Unsupervised learning is especially useful in instances where you might not know what patterns to look for. Furthermore, new data is constantly fed back into the system for training — without any manual input required from a human.

Say an e-commerce website like Amazon wants to create a targeted marketing campaign. They typically already know a lot about their customers, including their age, purchasing history, browsing habits, location, and much more. An unsupervised learning algorithm would be able to form relationships between these variables all by itself. It might help marketers realize that customers from a particular area tend to purchase certain types of clothing or that young shoppers are more likely to spend on recreational items. Whatever the case may be, it’s a completely hands-off process of number-crunching and discovery.

Unsupervised learning excels at finding patterns and relationships in a dataset that a human might otherwise overlook.

All in all, unsupervised learning is a useful technique in scenarios that are not quite as straightforward as those with known outcomes.
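A minimal sketch of the unsupervised case is clustering: the toy k-means routine below groups unlabeled numbers into two clusters without ever being told what the groups mean (illustrative only; real clustering libraries are far more robust):

```python
def kmeans_1d(points, k=2, iterations=10):
    """Toy k-means: split unlabeled 1-D data into k clusters, no labels given."""
    # Seed centroids with evenly spaced samples from the sorted data.
    centroids = sorted(points)[:: max(1, len(points) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        # Assign each point to its nearest centroid...
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # ...then move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Unlabeled customer ages; nobody tells the algorithm "young" vs "older".
ages = [18, 21, 22, 19, 45, 50, 48, 52]
print(kmeans_1d(ages))
```

The algorithm discovers the two age groups on its own, which is the hands-off discovery process described above in miniature.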

Finally, we have reinforcement learning, which works particularly well in applications that have many ways to reach a clear goal. It’s a system of trial and error — positive actions are rewarded, while negative ones are discarded. This means the model can evolve based on its own experiences over time.

A game of chess is the perfect application for reinforcement learning because the algorithm can learn from its mistakes. In fact, Google’s DeepMind subsidiary built an ML program that used reinforcement learning to get better at the board game Go. Between 2016 and 2017, it went on to defeat multiple Go world champions in competitive settings — a remarkable achievement, to say the least.
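The reward-driven trial-and-error loop can be sketched with the simplest RL setting, a two-armed bandit: the agent tries actions, reinforces the ones that pay off, and gradually prefers the better option (an illustrative toy, nothing like DeepMind's actual Go system):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Two slot-machine "arms" with hidden win probabilities the agent must discover.
TRUE_WIN_RATE = {"arm_a": 0.3, "arm_b": 0.8}

value = {"arm_a": 0.0, "arm_b": 0.0}   # the agent's learned estimate per action
counts = {"arm_a": 0, "arm_b": 0}

for step in range(2000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        arm = random.choice(list(value))
    else:
        arm = max(value, key=value.get)
    reward = 1 if random.random() < TRUE_WIN_RATE[arm] else 0
    counts[arm] += 1
    # Positive outcomes nudge the estimate up, negative ones pull it down.
    value[arm] += (reward - value[arm]) / counts[arm]

best = max(value, key=value.get)
print(best, round(value[best], 2))
```

After a couple of thousand trials the agent's estimates converge toward the true win rates and it settles on the better arm, purely from rewards.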

What about neural networks and what is deep learning?


A neural network is a specific subtype of machine learning inspired by the behavior of the human brain. Biological neurons in an animal body are responsible for sensory processing. They take information from our surroundings and transmit electrical signals over long distances to the brain. Our bodies have billions of such neurons that all communicate with each other, helping us see, feel, hear, and everything in between.

An artificial neural network mimics the behavior of biological neurons in an animal body.

In that vein, artificial neurons in a neural network talk to each other as well. They break down complex problems into smaller chunks or “layers”. Each layer is made up of neurons (also called nodes) that accomplish a specific task and communicate their results with nodes in the next layer. In a neural network trained to recognize objects, for example, you’ll have one layer with neurons that detect edges, another that looks at changes in color, and so on.

Layers are linked to each other, so “activating” a particular chain of neurons gives you a certain predictable output. Because of this multi-layer approach, neural networks excel at solving complex problems. Consider autonomous or self-driving vehicles, for instance. They use a myriad of sensors and cameras to detect roads, signage, pedestrians, and obstacles. All of these variables have some complex relationship with each other, making it a perfect application for a multi-layered neural network.

Deep learning is a term that’s often used to describe a neural network with many layers. The term “deep” here simply refers to the layer depth.
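A single artificial neuron just weights its inputs, sums them, and passes the result through an activation function; layers are chains of these. A tiny hand-wired two-layer network computing XOR shows the idea (the weights are set by hand here for clarity; in practice they would be learned during training):

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, then a step activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def xor_network(x1, x2):
    # Hidden layer: one neuron fires on "either input on", one on "both on".
    either = neuron([x1, x2], [1, 1], -0.5)   # behaves like OR
    both = neuron([x1, x2], [1, 1], -1.5)     # behaves like AND
    # Output layer combines the hidden features: "either, but not both".
    return neuron([either, both], [1, -1], -0.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_network(a, b))
```

No single neuron can compute XOR on its own; it takes the second layer combining the first layer's outputs, which is the multi-layer advantage described above in its smallest form.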

Where do we see machine learning in our daily lives?


Machine learning influences pretty much every aspect of our digital lives. Social media platforms like Instagram, for example, often show you targeted advertisements based on the posts you interact with. If you like an image containing food, you might get advertisements related to meal kits or nearby restaurants. Similarly, streaming services like YouTube and Netflix can infer new genres and topics you may be interested in, based on your watch history and duration.

Even on personal devices like smartphones, features such as facial recognition rely heavily on machine learning. Take the Google Photos app, for example. It not only detects faces from your photos but also uses machine learning to identify unique facial features for each individual. The pictures you upload help improve the system, allowing it to make more accurate predictions in the future. The app also often prompts you to verify if a certain match is accurate — indicating that the system has a low confidence level in that particular prediction.

See also: How on-device machine learning has changed the way we use our phones

Indeed, machine learning is all about achieving reasonably high accuracy in the least amount of time. It’s not always successful, of course.

In 2016, Microsoft unveiled a state-of-the-art chatbot named Tay. As a showcase of its human-like conversational abilities, the company allowed Tay to interact with the public through a Twitter account. However, the project was taken offline within just 24 hours after the bot began responding with derogatory remarks and other inappropriate dialogue.

The above example highlights an important point — machine learning is only really useful if the training data is reasonably high quality and aligns with your end goal. Tay was trained on live Twitter submissions, meaning it was easily manipulated or trained by malicious actors.

Machine learning isn’t a one-size-fits-all arrangement. It requires careful planning, a varied and clean data set, and occasional supervision.

Dangers of machine learning aside, the technology can also help in scenarios where traditional methods just cannot keep pace.

Rendering graphically complex video games represents one such application. For decades, we’ve relied on yearly performance increases to achieve this task. However, processing power has started to plateau of late — even as other technologies like display resolutions and refresh rates continue to march upwards.

ML-based upscaling technologies like Nvidia’s Deep Learning Super Sampling (DLSS) are helping bridge this gap. The way DLSS works is rather straightforward — the GPU first renders an image at a lower resolution and then uses a trained ML model to upscale it. The results are impressive, to say the least — far better than traditional, non-ML upscaling technologies. Similarly, super-resolution upscaling is used to improve smartphone photography image quality. Machine learning isn’t just for basic predictions anymore.
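Stripped of the neural network, the pipeline shape is simply: render small, then upscale. The sketch below stands a trivial nearest-neighbor filter in where DLSS would run its trained model (illustrative only; the actual DLSS model and API are proprietary and do far more, including using motion vectors and past frames):

```python
def render_low_res():
    """Pretend GPU render at low resolution: a 2x2 'image' of brightness values."""
    return [[10, 200],
            [60, 120]]

def upscale(image, factor=2):
    """Placeholder for the trained ML model: plain nearest-neighbor here.
    DLSS replaces this step with a network trained on high-res reference
    frames, which is why its output looks far sharper than this."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in image for _ in range(factor)]

frame = render_low_res()
print(upscale(frame))
```

The performance win comes from the first step: rendering a quarter of the pixels is cheap, and the upscaling pass (ML or not) costs far less than rendering at full resolution.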

How does hardware affect machine learning performance?


Many of the aforementioned machine learning applications, including facial recognition and ML-based image upscaling, were once impossible to accomplish on consumer-grade hardware. In other words, you had to connect to a powerful server sitting in a data center to accomplish most ML-related tasks.

Even today, training an ML model is extremely hardware intensive and pretty much requires dedicated hardware for larger projects. Since training involves running a small number of algorithms repeatedly, though, manufacturers often design custom chips to achieve better performance and efficiency. These are called application-specific integrated circuits or ASICs. Large-scale ML projects typically make use of either ASICs or GPUs for training, and not general-purpose CPUs. These offer higher performance and lower power consumption than a traditional CPU.

Machine learning accelerators help improve inference efficiency, making it possible to deploy ML apps to more and more devices.

Things have started to change, however, at least on the inference side of things. On-device machine learning is starting to become more commonplace on devices like smartphones and laptops. This is thanks to the inclusion of dedicated, hardware-level ML accelerators within modern processors and SoCs.

Read more: Why are smartphone chips suddenly including an AI processor?

Machine learning accelerators are extremely power efficient compared to an ordinary processor. This is why the DLSS upscaling technology we spoke about earlier, for example, is only available on newer Nvidia graphics cards with the requisite ML acceleration hardware. In smartphones, we’ve seen specific low-power accelerators designed for voice detection and a growing trend in ML processing power integrated tightly with more traditional image processors for better photography.

Going forward, we’re likely to see feature segmentation and exclusivity depending on each new hardware generation’s machine learning acceleration capabilities. In fact, we’re already witnessing that happen in the smartphone industry.

Machine learning at the edge: Smartphones and consumer devices


ML accelerators have been built into smartphone SoCs for a while now. However, they’ve become a key focal point of late due to the rise of use-cases like computational photography and voice recognition.

In 2021, Google announced its first semi-custom SoC, nicknamed Tensor, for the Pixel 6. One of Tensor’s key differentiators was its custom TPU — or Tensor Processing Unit. Google claims that its chip delivers significantly faster ML inference versus the competition, especially in areas such as natural language processing. This, in turn, allowed Google to use Tensor for a suite of new features on the Pixel 6, including real-time language translation, HDR-enabled video recording, and faster speech-to-text functionality. Smartphone processors from MediaTek, Qualcomm, and Samsung have their own takes on dedicated ML hardware too.

See also: What is Google Tensor?

That’s not to say that cloud-based inference isn’t still in use today — quite the opposite, in fact. While on-device machine learning has become increasingly common, it’s still far from ideal. This is especially true for complex problems like voice recognition and image classification. Voice assistants like Amazon’s Alexa and Google Assistant are only as good as they are today because they rely on powerful cloud infrastructure for both inference and model re-training.

On-device machine learning enabled a plethora of futuristic smartphone features, including computational photography, real-time translation, and live captions.

However, as with most new technologies, new solutions and techniques are constantly on the horizon. In 2017, Google’s HDRnet algorithm revolutionized smartphone imaging, while MobileNet brought down the size of ML models and made on-device inference feasible. More recently, the company highlighted how it uses a privacy-preserving technique called federated learning to train machine learning models with user-generated data.
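Federated learning is conceptually simple: each client trains on its own local data and sends only model updates to the server, which averages them; raw user data never leaves the device. The toy sketch below shows the core averaging step in the spirit of the FedAvg algorithm, using a hypothetical two-client linear model. Google's production system is far more involved (secure aggregation, client sampling, and so on).

```python
# Toy sketch of federated averaging: clients run gradient steps locally,
# the server only ever sees model weights, never raw user data.

def local_update(weights, client_data, lr=0.1):
    """One gradient-descent step on a least-squares fit y ~ w . x."""
    grads = [0.0] * len(weights)
    for x, y in client_data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grads[i] += 2.0 * err * xi / len(client_data)
    return [w - lr * g for w, g in zip(weights, grads)]

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(len(client_weights[0]))
    ]

# Two clients whose training data stays on-device.
clients = [
    [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0)],  # client A's private data
    [([1.0, 1.0], 5.0)],                      # client B's private data
]
global_w = [0.0, 0.0]
for _ in range(100):  # communication rounds
    local_models = [local_update(global_w, data) for data in clients]
    global_w = federated_average(local_models, [len(d) for d in clients])
print(global_w)  # converges toward [2, 3], which fits both clients' data
```

Note the privacy property: the server computes a useful global model while seeing only weight vectors, which is the crux of the technique.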

Apple, meanwhile, also integrates hardware ML accelerators within all of its consumer chips these days. The Apple M1 family of SoCs included in the latest MacBooks, for instance, has enough machine learning grunt to perform training tasks on the device itself.

And with that, you’re now up to speed on the basics of machine learning! If you’re looking to get started with the technology on your own, consider checking out our guide on adding machine learning to an Android app.


New Artist Spotlight: Toigo Does Indie Pop Right With New Single, ‘We’ve Got Tonight to Leave Me Broken’ [Video]

With multiple excellent releases dating back to 2014, an epic look that’s a cross between Rick Rubin and Yosemite Bear and a falsetto that rivals Jeff Buckley or Alexis Taylor from Hot Chip, Toigo should truly be mega-famous by now. One gets the impression from his music, however, that fame isn’t really what Toigo (full name Zachary Toigo) is after. With a brand of indie pop rock as well-crafted as it is joyful and interesting, it seems that Toigo makes art purely for the love of it. Nonetheless, with his latest electronica-infused singles “We’ve Got Tonight to Leave Me Broken,” “Starchild” and “Another Shade of Blue,” let’s see if we can put him on the EDM map.

Right out of the gate with his debut album Leafyleaks, Zachary’s then more acoustic-based work was pretty freaking flawless. With lashings of Radiohead, Arcade Fire and Death Cab for Cutie, Zachary’s sound was as well-developed as his voice pretty much from jump. In the interstitial work he’s done between that first album and now, Toigo has only further developed his craft, playing with loads of other styles and genres but really just putting his own stamp on and excelling at them all. From grunge to Ben Folds-style piano rock to a bit of synth pop on his 2020 singles “Milos” and “Perfectly Clear,” it also never seemed like Toigo was trying too hard or venturing out of his depth. He’s just that versatile.

Rolling up the Toigo timeline to present, “We’ve Got Tonight to Leave Me Broken,” “Starchild” and “Another Shade of Blue” are technically the first teasers off his upcoming EP, due out in late February or March. Still very much indie pop but now working with Grammy-winning producer Brian Howes, these singles are laced with electronic production that brings Toigo’s work to yet another echelon of awesome. Hearkening back to the heyday of Hot Chip, Cut Copy and other electro indie pop acts of the late 00s, the first thing that jumps out on these tracks is his voice. Somewhat obscured by guitars in previous offerings, now the pitch-perfect, rich and incredible range of his vox shines through, especially on “We’ve Got Tonight…” where a dazzling falsetto chorus takes the song from great to “why the hell isn’t this all over the radio?” Seriously, why?

Since music seems to just flow out of this guy and he nails every genre he touches, it’ll be interesting to see where Toigo goes next and what the rest of his forthcoming EP has in store. In the meantime, we’re happy to have that brilliant falsetto on the synth side of things. No matter what genre, however, this is a bandwagon you’ll want to jump on.

“We’ve Got Tonight to Leave Me Broken” and “Starchild” are available to stream on Spotify, and the rest of Toigo’s (as Zachary) discography is available on his Bandcamp page. For more videos and some fun stories from Toigo, check out his YouTube page.
