In addition to GPU manufacturing, Nvidia provides an API called CUDA that allows the creation of massively parallel programs which utilize GPUs.[6][7] Its GPUs are deployed in supercomputing sites around the world.[8][9] More recently, it has moved into the mobile computing market, where it produces Tegra mobile processors for smartphones and tablets as well as vehicle navigation and entertainment systems.[10][11][12] Its competitors include AMD, Intel,[13] Qualcomm,[14] and AI-accelerator companies such as Graphcore. It also makes AI-powered software for audio and video processing, e.g. Nvidia Maxine.[15]
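To give a sense of what such a massively parallel program looks like, the following is a minimal, hypothetical CUDA sketch (not taken from Nvidia's documentation) in which a vector addition is split across one lightweight GPU thread per element:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// One GPU thread per array element: the massive parallelism CUDA exposes.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                     // one million elements
    float *a, *b, *c;

    // Unified (managed) memory keeps the sketch short.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    vectorAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);               // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```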
Nvidia's offer to acquire Arm from SoftBank, announced in September 2020, fell through after extended regulatory scrutiny, and the deal was terminated in February 2022; it would have been the largest semiconductor acquisition to date.[16][17]
In 1993, the three co-founders believed that the proper direction for the next wave of computing was accelerated computing, such as graphics-based computing, because it could solve problems that general-purpose computing could not.[24] They also observed that video games were simultaneously one of the most computationally challenging problems and would have incredibly high sales volume, a combination that rarely occurs.[24] Video games became the company's flywheel to reach large markets and fund huge R&D efforts to solve massive computational problems.[24] With $40,000 in the bank, the company was born.[24] The company subsequently received $20 million of venture capital funding from Sequoia Capital and others.[25] Nvidia initially had no name, and the co-founders named all their files NV, as in "next version".[24] The need to incorporate the company prompted the co-founders to review all words with those two letters, leading them to "invidia", the Latin word for "envy".[24] The company's original headquarters office was in Sunnyvale, California.[24] Nvidia went public on January 22, 1999.[26][27][28]
The release of the RIVA TNT in 1998 solidified Nvidia's reputation for developing capable graphics adapters. In late 1999, Nvidia released the GeForce 256 (NV10), most notably introducing on-board transform and lighting (T&L) to consumer-level 3D hardware. Running at 120 MHz and featuring four pixel pipelines, it implemented advanced video acceleration, motion compensation, and hardware sub-picture alpha blending. The GeForce outperformed existing products by a wide margin.
Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft's Xbox game console, which earned Nvidia a $200 million advance. However, the project took many of its best engineers away from other projects. In the short term this did not matter, and the GeForce2 GTS shipped in the summer of 2000. In December 2000, Nvidia reached an agreement to acquire the intellectual assets of its one-time rival 3dfx, a pioneer in consumer 3D graphics technology leading the field from the mid-1990s until 2000.[29][30] The acquisition process was finalized in April 2002.[31]
In July 2002, Nvidia acquired Exluna for an undisclosed sum. Exluna made software-rendering tools, and its personnel were merged into the Cg project.[32] In August 2003, Nvidia acquired MediaQ for approximately US$70 million.[33] On April 22, 2004, Nvidia acquired iReady, a provider of high-performance TCP/IP and iSCSI offload solutions.[34] In December 2004, it was announced that Nvidia would assist Sony with the design of the graphics processor (RSX) in the PlayStation 3 game console. On December 14, 2005, Nvidia acquired ULI Electronics, which at the time supplied third-party southbridge parts for chipsets to ATI, Nvidia's competitor.[35] In March 2006, Nvidia acquired Hybrid Graphics.[36] In December 2006, Nvidia, along with its main rival in the graphics industry AMD (which had acquired ATI), received subpoenas from the U.S. Department of Justice regarding possible antitrust violations in the graphics card industry.[37]
Forbes named Nvidia its Company of the Year for 2007, citing its accomplishments during that year as well as over the previous five years.[38] On January 5, 2007, Nvidia announced that it had completed the acquisition of PortalPlayer, Inc.[39] In February 2008, Nvidia acquired Ageia, developer of the PhysX physics engine and physics processing unit. Nvidia announced that it planned to integrate the PhysX technology into its future GPU products.[40][41]
In July 2008, Nvidia took a write-down of approximately $200 million on its first-quarter revenue, after reporting that certain mobile chipsets and GPUs produced by the company had "abnormal failure rates" due to manufacturing defects. Nvidia, however, did not reveal the affected products. In September 2008, Nvidia became the subject of a class action lawsuit over the defects, which claimed that the faulty GPUs had been incorporated into certain laptop models manufactured by Apple Inc., Dell, and HP. In September 2010, Nvidia reached a settlement, in which it would reimburse owners of the affected laptops for repairs or, in some cases, replacement.[42][43] On January 10, 2011, Nvidia signed a six-year, $1.5 billion cross-licensing agreement with Intel, ending all litigation between the two companies.[44]
In November 2011, after initially unveiling it at Mobile World Congress, Nvidia released its Tegra 3 ARM system-on-a-chip for mobile devices. Nvidia claimed that the chip featured the first-ever quad-core mobile CPU.[45][46] In May 2011, it was announced that Nvidia had agreed to acquire Icera, a baseband chip making company in the UK, for $367 million.[47] In January 2013, Nvidia unveiled the Tegra 4, as well as the Nvidia Shield, an Android-based handheld game console powered by the new system-on-chip.[48] On July 29, 2013, Nvidia announced that it had acquired PGI from STMicroelectronics.[49]
In 2014, Nvidia ported the Valve games Portal and Half-Life 2 to its Nvidia Shield tablet as Lightspeed Studio.[50][51] Since 2014, Nvidia has diversified its business, focusing on three markets: gaming, automotive electronics, and mobile devices.[52]
On May 6, 2016, Nvidia unveiled the first GPUs of the GeForce 10 series, the GTX 1080 and 1070, based on the company's new Pascal microarchitecture. Nvidia claimed that both models outperformed its Maxwell-based Titan X model; the models incorporate GDDR5X and GDDR5 memory respectively, and use a 16 nm manufacturing process. The architecture also supports a new hardware feature known as simultaneous multi-projection (SMP), which is designed to improve the quality of multi-monitor and virtual reality rendering.[53][54][55] Laptops that include these GPUs and are sufficiently thin – as of late 2017, under 0.8 inches (20 mm) – have been designated as meeting NVIDIA's "Max-Q" design standard.[56]
In July 2016, Nvidia agreed to a settlement in a false advertising lawsuit regarding its GTX 970 model, as the cards were unable to use all of their advertised 4 GB of RAM due to limitations of their hardware design.[57] In May 2017, Nvidia announced a partnership with Toyota, which would use Nvidia's Drive PX-series artificial intelligence platform for its autonomous vehicles.[58] In July 2017, Nvidia and Chinese search giant Baidu announced a far-reaching AI partnership that includes cloud computing, autonomous driving, consumer devices, and Baidu's open-source AI framework PaddlePaddle. Baidu announced that Nvidia's Drive PX 2 AI would be the foundation of its autonomous-vehicle platform.[59]
Nvidia officially released the Titan V on December 7, 2017.[60][61]
Nvidia officially released the Nvidia Quadro GV100 on March 27, 2018.[62] Nvidia officially released the RTX 2080 GPUs on September 27, 2018. In 2018, Google announced that Nvidia's Tesla P4 graphics cards would be integrated into Google Cloud's artificial intelligence services.[63]
In May 2018, a thread was started on the Nvidia user forum[64] asking the company to update users on when it would release web drivers for its cards installed on legacy Mac Pro machines (up to the mid-2012 Mac Pro 5,1) running macOS Mojave 10.14. Web drivers are required to enable the graphics acceleration and multiple-monitor capabilities of the GPU. On its Mojave update info website, Apple stated that macOS Mojave would run on legacy machines with 'Metal compatible' graphics cards[65] and listed Metal compatible GPUs, including some manufactured by Nvidia.[66] However, this list did not include Metal compatible cards that worked in macOS High Sierra using Nvidia-developed web drivers. In September, Nvidia responded, "Apple fully controls drivers for Mac OS. But if Apple allows, our engineers are ready and eager to help Apple deliver great drivers for Mac OS 10.14 (Mojave)."[67] In October, Nvidia followed this up with another public announcement, "Apple fully controls drivers for Mac OS. Unfortunately, Nvidia currently cannot release a driver unless it is approved by Apple,"[68] suggesting a possible rift between the two companies.[69] By January 2019, with still no sign of the enabling web drivers, Apple Insider weighed in on the controversy with a claim that Apple management "doesn't want Nvidia support in macOS".[70] The following month, Apple Insider followed this up with another claim that Nvidia support was abandoned because of "relational issues in the past",[71] and that Apple was developing its own GPU technology.[72] Without Apple-approved Nvidia web drivers, Apple users faced replacing their Nvidia cards with a competing supported brand, such as AMD Radeon, from the list recommended by Apple.[73]
On March 11, 2019, Nvidia announced a deal to buy Mellanox Technologies for $6.9 billion[74] to substantially expand its footprint in the high-performance computing market. In May 2019, Nvidia announced new RTX Studio laptops. Nvidia said the new laptops would be seven times faster than a top-end MacBook Pro with a Core i9 and AMD's Radeon Pro Vega 20 graphics in apps such as Maya and RedCine-X Pro.[75] In August 2019, Nvidia announced Minecraft RTX, an official Nvidia-developed patch for the game Minecraft adding real-time DXR ray tracing exclusively to the Windows 10 version of the game. The whole game is, in Nvidia's words, "refit" with path tracing, which dramatically affects the way light, reflections, and shadows work inside the engine.[76]
In May 2020, Nvidia's top scientists developed an open-source ventilator to address the shortage resulting from the global coronavirus pandemic.[77] On May 14, 2020, Nvidia officially announced its Ampere GPU microarchitecture and the Nvidia A100 GPU accelerator.[78][79] In July 2020, it was reported that Nvidia was in talks with SoftBank to buy Arm, a UK-based chip designer, for $32 billion.[80]
On September 1, 2020, Nvidia officially announced the GeForce 30 series based on the company's new Ampere microarchitecture.[81][82]
On September 13, 2020, it was announced that Nvidia would buy Arm from SoftBank Group for $40 billion, subject to regulatory approval, with SoftBank retaining a 10% share of Nvidia.[83][84][85][86]
In October 2020, Nvidia announced its plan to build the most powerful computer in Cambridge, England. Named Cambridge-1, the computer would employ AI to support healthcare research, with completion expected by the end of 2020 at a cost of approximately £40 million. According to Jensen Huang, "The Cambridge-1 supercomputer will serve as a hub of innovation for the UK, and further the groundbreaking work being done by the nation's researchers in critical healthcare and drug discovery."[87]
Also in October 2020, alongside the release of the Nvidia RTX A6000, Nvidia announced it was retiring its workstation GPU brand Quadro, shifting future products to the Nvidia RTX name and basing them on the Nvidia Ampere architecture.[5]
In August 2021, the proposed takeover of Arm stalled after the UK's Competition and Markets Authority raised "significant competition concerns".[88] In October 2021, the European Commission opened a competition investigation into the takeover. The Commission stated that Nvidia's acquisition could restrict competitors' access to Arm's products and provide Nvidia with too much internal information on its competitors because of those competitors' deals with Arm. SoftBank (the parent company of Arm) and Nvidia announced in early February 2022 that they "had agreed not to move forward with the transaction 'because of significant regulatory challenges'".[89] The Commission's investigation had been scheduled to conclude by March 15, 2022.[90][91] Also in February 2022, Nvidia was reportedly compromised by a cyberattack. The attack coincided with the 2022 Russian invasion of Ukraine, though there is no indication that it came from Russia or Russian hackers.[92]
In March 2022, Nvidia CEO Jensen Huang said the company was open to having Intel manufacture its chips in the future.[93] This was the first time Nvidia had indicated it would consider working with Intel's upcoming foundry services.
In April 2022, it was reported that Nvidia planned to open a new research center in Yerevan, Armenia.[94]
In September 2022, Nvidia announced its next-generation automotive-grade chip, Drive Thor.[95][96]
Following U.S. Department of Commerce regulations that took effect in October 2022 and placed an embargo on exports of advanced microchips to China, Nvidia saw its data center chips added to the export control list. The next month, the company unveiled a new advanced chip for China, called the A800 GPU, that met the export control rules.[97]
For the fiscal year 2020, Nvidia reported earnings of US$2.796 billion, with annual revenue of US$10.918 billion, a decline of 6.8% from the previous fiscal year. Nvidia's shares traded at over $531 per share, and its market capitalization was valued at over US$328.7 billion in January 2021.[98]
For the second quarter of 2020, Nvidia reported sales of $3.87 billion, a 50% rise from the same period in 2019. The surge in sales was attributed to the coronavirus pandemic and the accompanying rise in demand for computer technology. According to the company's chief financial officer, Colette Kress, the effects of the pandemic will "likely reflect this evolution in enterprise workforce trends with a greater focus on technologies, such as Nvidia laptops and virtual workstations, that enable remote work and virtual collaboration."[99]
Nvidia's GPU Technology Conference (GTC), now called Nvidia GTC, is a series of technical conferences held around the world.[100] It originated in 2009 in San Jose, California, with an initial focus on the potential for solving computing challenges through GPUs.[101] In recent years, the conference focus has shifted to various applications of artificial intelligence and deep learning, including self-driving cars, healthcare, high-performance computing, and Nvidia Deep Learning Institute (DLI) training.[102] GTC 2018 attracted over 8,400 attendees.[100] GTC 2020 was converted to a digital event and drew roughly 59,000 registrants.[103]
Nvidia Drive is Nvidia's range of automotive hardware and software products for designers and manufacturers of autonomous vehicles. The Drive PX series is a high-performance computer platform aimed at autonomous driving through deep learning,[104] while Driveworks is an operating system for driverless cars.[105]
Rather than documenting its hardware for third-party driver developers, Nvidia provides its own binary GeForce graphics drivers for X.Org and an open-source library that interfaces with the Linux, FreeBSD or Solaris kernels and the proprietary graphics software. Nvidia also provided, but has stopped supporting, an obfuscated open-source driver that only supports two-dimensional hardware acceleration and ships with the X.Org distribution.[111]
The proprietary nature of Nvidia's drivers has generated dissatisfaction within free-software communities.[112] Some Linux and BSD users insist on using only open-source drivers and regard Nvidia's insistence on providing nothing more than a binary-only driver as inadequate, given that competing manufacturers such as Intel offer support and documentation for open-source developers and that others (like AMD) release partial documentation and provide some active development.[113][114]
Because of the closed nature of the drivers, Nvidia video cards cannot deliver adequate features on some platforms and architectures, given that the company only provides x86/x64 and ARMv7-A driver builds.[115] As a result, support for 3D graphics acceleration in Linux on PowerPC does not exist, nor does support for Linux on the hypervisor-restricted PlayStation 3 console.
Some users claim that Nvidia's Linux drivers impose artificial restrictions, like limiting the number of monitors that can be used at the same time, but the company has not commented on these accusations.[116]
In 2014, with its Maxwell GPUs, Nvidia began requiring firmware supplied by the company to unlock all features of its graphics cards. This remains the case and makes writing open-source drivers difficult.[117][118][119]
On 12 May 2022, Nvidia announced that it was open-sourcing its GPU kernel drivers. The userland utilities remain closed source, leaving users still dependent on Nvidia's proprietary software.[120][121][122]
Nvidia GPUs are used in deep learning and accelerated analytics due to Nvidia's CUDA API, which allows programmers to utilize the large number of cores present in GPUs to parallelize BLAS operations that are used extensively in machine learning algorithms.[7] They were included in many Tesla vehicles before Elon Musk announced at Tesla Autonomy Day in 2019 that the company had developed its own SoC and full self-driving computer and would stop using Nvidia hardware in its vehicles.[123][124] These GPUs are used by researchers, laboratories, tech companies and enterprise companies.[125] In 2009, Nvidia was involved in what was called the "big bang" of deep learning, "as deep-learning neural networks were combined with Nvidia graphics processing units (GPUs)".[126] That year, Google Brain used Nvidia GPUs to create deep neural networks capable of machine learning, where Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times.[127]
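As a rough illustration of what parallelizing a BLAS operation on a GPU looks like in practice, the sketch below offloads SAXPY (y = a·x + y, a level-1 BLAS routine) to the GPU through Nvidia's cuBLAS library. It is a minimal, hypothetical example, not code from Nvidia or from any machine-learning framework; error checking is omitted for brevity.

```cuda
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cstdio>

// Minimal sketch: run a BLAS level-1 operation (SAXPY: y = a*x + y) on the GPU
// through cuBLAS, which spreads the element-wise work across the GPU's cores.
int main() {
    const int n = 1 << 20;          // one million elements
    const float alpha = 2.0f;
    float *x, *y;

    // Managed memory keeps the sketch short; production code often uses explicit copies.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, x, 1, y, 1);   // executed in parallel on the GPU
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    printf("y[0] = %f\n", y[0]);                  // expect 4.0
    cublasDestroy(handle);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Built with nvcc and linked against cuBLAS (e.g. nvcc saxpy.cu -lcublas), the same call distributes the work across however many cores the GPU provides.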
In April 2016, Nvidia produced the DGX-1, based on an 8-GPU cluster, to improve the ability of users to use deep learning by combining GPUs with integrated deep learning software.[128] It also developed Nvidia Tesla K80 and P100 GPU-based virtual machines, which are available through Google Cloud and which Google installed in November 2016.[129] Microsoft added GPU servers in a preview offering of its N series, based on Nvidia's Tesla K80s, each containing 4,992 processing cores. Later that year, AWS launched its P2 instances, which use up to 16 Nvidia Tesla K80 GPUs. That month Nvidia also partnered with IBM to create a software kit that boosts the AI capabilities of Watson,[130] called IBM PowerAI.[131][132] Nvidia also offers its own Nvidia Deep Learning software development kit.[133] In 2017, the GPUs were also brought online at the Riken Center for Advanced Intelligence Project for Fujitsu.[134] The company's deep learning technology led to a boost in its 2017 earnings.[135]
In May 2018, researchers in Nvidia's artificial intelligence division showed that a robot could learn to perform a task simply by observing a person doing the same task. They created a system that, after brief refinement and testing, could already be used to control next-generation universal robots. In addition to GPU manufacturing, Nvidia provides parallel processing capabilities to researchers and scientists that allow them to efficiently run high-performance applications.[136]
Nvidia's Inception Program was created to support startups making exceptional advances in the fields of artificial intelligence and data science. Award winners are announced at Nvidia's GTC Conference. As of March 2018, there were 2,800 startups in the Inception Program. As of August 2021, Nvidia Inception has surpassed 8,500 members in 90 countries, with cumulative funding of US$60 billion.[137][138][139]
Issues with the GeForce GTX 970's specifications were first brought up by users when they found that the cards, while featuring 4 GB of memory, rarely accessed memory above the 3.5 GB boundary. Further testing and investigation eventually led to Nvidia issuing a statement that the card's initially announced specifications had been altered without notice before the card was made commercially available, and that the card took a performance hit once memory above the 3.5 GB limit was put into use.[140][141][142]
The card's back-end hardware specifications, initially announced as being identical to those of the GeForce GTX 980, differed in the amount of L2 cache (1.75 MB versus 2 MB in the GeForce GTX 980) and the number of ROPs (56 versus 64 in the 980). Additionally, it was revealed that the card was designed to access its memory as a 3.5 GB section plus a 0.5 GB one, with access to the latter being seven times slower than to the former.[143] The company then promised a specific driver modification to alleviate the performance issues produced by the card's cutbacks.[144] However, Nvidia later clarified that the promise had been a miscommunication and that there would be no specific driver update for the GTX 970.[145] Nvidia claimed that it would assist customers who wanted refunds in obtaining them.[146] On February 26, 2015, Nvidia CEO Jen-Hsun Huang went on record in Nvidia's official blog to apologize for the incident.[147] In February 2015, a class-action lawsuit alleging false advertising was filed against Nvidia and Gigabyte Technology in the U.S. District Court for Northern California.[148][149]
Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers.[150] This comes at the cost of dividing the memory bus into high-speed and low-speed segments that cannot be accessed at the same time unless one segment is reading while the other is writing, because the L2/ROP unit managing both of the GDDR5 controllers shares the read return channel and the write data bus between the two GDDR5 controllers and itself.[150] This arrangement is used in the GeForce GTX 970, which can therefore be described as having 3.5 GB in its high-speed segment on a 224-bit bus and 0.5 GB in a low-speed segment on a 32-bit bus.[150]
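A short worked calculation shows how those figures follow from disabling one of eight L2/ROP units; the assumption that the card has eight 32-bit GDDR5 controllers, each backing 512 MB of memory, is inferred from the totals above rather than stated explicitly in the sources.

```latex
% GTX 970 partitioning arithmetic (assumes eight 32-bit GDDR5 controllers, 512 MB each)
\begin{align*}
\text{L2 cache:}     &\quad 7 \times 256\ \text{KB} = 1.75\ \text{MB}
                      \quad (\text{vs. } 8 \times 256\ \text{KB} = 2\ \text{MB on the GTX 980}) \\
\text{ROPs:}         &\quad 7 \times 8 = 56
                      \quad (\text{vs. } 8 \times 8 = 64) \\
\text{Fast segment:} &\quad 7 \times 512\ \text{MB} = 3.5\ \text{GB}
                      \quad \text{on a } 7 \times 32 = 224\text{-bit bus} \\
\text{Slow segment:} &\quad 1 \times 512\ \text{MB} = 0.5\ \text{GB}
                      \quad \text{on a } 32\text{-bit bus}
\end{align*}
```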
On July 27, 2016, Nvidia agreed to a preliminary settlement of the U.S. class action lawsuit,[148] offering a $30 refund on GTX 970 purchases. The agreed-upon refund represents the portion of the card's cost attributable to the storage and performance capabilities that consumers assumed they were obtaining when they purchased it.[151]
It appears that while this core feature is in fact exposed by the driver,[157] Nvidia partially implemented it through a driver-based shim, at a high performance cost.[156] Unlike AMD's competing GCN-based graphics cards, which include a full implementation of hardware-based asynchronous compute,[158][159] Nvidia planned to rely on the driver to implement a software queue and a software distributor to forward asynchronous tasks to the hardware schedulers, which are capable of distributing the workload to the correct units.[160] Asynchronous compute on Maxwell therefore requires that both a game and the GPU driver be specifically coded for asynchronous compute on Maxwell in order to enable this capability.[161] The 3DMark Time Spy benchmark shows no noticeable performance difference between asynchronous compute being enabled and disabled.[161] Asynchronous compute is disabled by the driver for Maxwell.[161]
Oxide claims that this led to Nvidia pressuring them not to include the asynchronous compute feature in their benchmark at all, so that the 900 series would not be at a disadvantage against AMD's products which implement asynchronous compute in hardware.[155]
Maxwell requires that the GPU be statically partitioned for asynchronous compute to allow tasks to run concurrently.[162] Each partition is assigned to a hardware queue. If any of the queues assigned to a partition empties out or is unable to submit work for any reason (e.g. a task in the queue must be delayed until a hazard is resolved), the partition and all of the resources in that partition reserved for that queue will idle.[162] Asynchronous compute therefore could easily hurt performance on Maxwell if software is not coded to work with Maxwell's static scheduler.[162] Furthermore, graphics tasks saturate Nvidia GPUs much more easily than they do AMD's GCN-based GPUs, which are much more heavily weighted towards compute, so Nvidia GPUs have fewer scheduling holes that could be filled by asynchronous compute than AMD's.[162] For these reasons, the driver forces a Maxwell GPU to place all tasks into one queue and execute each task in serial, giving each task the undivided resources of the GPU regardless of whether the task can saturate the GPU.[162]
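To make the general idea of independent GPU work queues concrete, here is a minimal, hypothetical CUDA sketch. It uses CUDA streams rather than the Direct3D 12/Vulkan graphics-plus-compute queues discussed above and is not specific to Maxwell; two independent kernels are submitted to two streams, and whether they actually overlap is decided by the GPU's scheduler, which is exactly the behavior at issue in this controversy.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Two independent "tasks": trivial kernels that each touch their own buffer.
__global__ void taskA(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

__global__ void taskB(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    // Each stream models an independent work queue.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // Submit the two tasks to separate queues; concurrent execution is up to
    // the hardware scheduler, not guaranteed by the API.
    taskA<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    taskB<<<(n + 255) / 256, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();   // wait for both queues to drain

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    printf("done\n");
    return 0;
}
```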
The Nvidia GeForce Partner Program was a marketing program designed to provide partnering companies with benefits such as public relations support, video game bundling, and marketing development funds.[163] The program proved to be controversial, with complaints about it possibly being an anti-competitive practice.[164]
First announced in a blog post on March 1, 2018,[165] it was canceled on May 4, 2018.[166]
On December 10, 2020, Nvidia told popular YouTube tech reviewer Steven Walton of Hardware Unboxed that it would no longer supply him with GeForce Founders Edition graphics card review units.[167][168] In a Twitter message, Hardware Unboxed said, "Nvidia have officially decided to ban us from receiving GeForce Founders Edition GPU review samples. Their reasoning is that we are focusing on rasterization instead of ray tracing. They have said they will revisit this 'should your editorial direction change.'"[169]
In emails from Nvidia Senior PR Manager Bryan Del Rizzo, disclosed by Walton, Nvidia had said:
...your GPU reviews and recommendations have continued to focus singularly on rasterization performance, and you have largely discounted all of the other technologies we offer gamers. It is very clear from your community commentary that you do not see things the same way that we, gamers, and the rest of the industry do.[170]
TechSpot, partner site of Hardware Unboxed, said, "this and other related incidents raise serious questions around journalistic independence and what they are expecting of reviewers when they are sent products for an unbiased opinion."[170]
A number of prominent technology reviewers came out strongly against Nvidia's move.[171][172] Linus Sebastian, of Linus Tech Tips, titled the episode of his popular weekly WAN Show "NVIDIA might ACTUALLY be EVIL..."[173] and was highly critical of the company's move to dictate specific outcomes of technology reviews.[174] The popular review site Gamers Nexus called it "Nvidia's latest decision to shoot both its feet: They've now made it so that any reviewers covering RT will become subject to scrutiny from untrusting viewers who will suspect subversion by the company. Shortsighted self-own from NVIDIA."[175]
Two days later, Nvidia reversed their stance.[176][177] Hardware Unboxed sent out a Twitter message, "I just received an email from Nvidia apologizing for the previous email & they've now walked everything back."[178][171] On December 14, Hardware Unboxed released a video explaining the controversy from their viewpoint.[179] Via Twitter, they also shared a second apology sent by Nvidia's Del Rizzo that said "to withhold samples because I didn't agree with your commentary is simply inexcusable and crossed the line."[180][181]
In 2018, Nvidia's chips became popular for cryptomining, the process of obtaining crypto rewards in exchange for verifying transactions on distributed ledgers, the U.S. Securities and Exchange Commission (SEC) said. However, the company failed to disclose that cryptomining was a "significant element" of its revenue growth from sales of chips designed for gaming, the SEC added in a statement and charging order. Those omissions misled investors and analysts who were interested in understanding the impact of cryptomining on Nvidia's business, the SEC emphasized. Nvidia, which did not admit or deny the findings, agreed to pay $5.5 million to settle the civil charges, according to a statement made by the SEC in May 2022.[182]