  • Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org

    The U.S. Patent and Trademark Office receives hundreds of thousands of patent applications every year. Often, the same inventor will apply for multiple patents. Other times, multiple inventors with similar names will each apply for a patent.

    The issue researchers and innovation enthusiasts run into when analyzing patent data is that there is no standard way to tell whether an inventor named on multiple patents is one person or several different people with similar names.

    PatentsView uses algorithms to make that determination, a process known as entity resolution or disambiguation. The process is not perfect, and the PatentsView team is constantly working to make the algorithm more accurate.  

    The first step in any improvement process is to evaluate how well the current system works. Olivier Binette, a PhD candidate in Statistical Science at Duke University, explored this question in his publication Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org.  

    Challenges for the PatentsView algorithm 

    Binette notes in his paper that the PatentsView entity resolution algorithm faces three main challenges in accurately determining whether the names on multiple patent applications belong to one or more than one inventor. 

    First, when researchers apply the PatentsView algorithm to benchmark datasets (smaller subsets of larger datasets used to train and test algorithms), the results tend to be more accurate than when the algorithm is applied to the larger, real-world data. This is likely because many of the false links between inventors with similar names do not appear in the benchmark dataset.

    Second, record pairs that share a common inventor are rare compared to the vast number of possible pairs of patent records. This imbalance creates a challenge for training the PatentsView algorithm to classify pairs of records as either sharing or not sharing an inventor.

    Finally, researchers have used many different methods to sample benchmark datasets and to adjust their estimates for those sampling designs. This inconsistency creates an additional challenge in evaluating the PatentsView algorithm.

    Binette’s method 

    Binette argues that his method for estimating the performance of the PatentsView algorithm addresses all three challenges.  

    His method uses three different representations of precision and recall. In this context, precision is the fraction of record pairs the algorithm links together that truly belong to the same inventor, and recall is the fraction of record pairs truly belonging to the same inventor that the algorithm succeeds in linking. So, an algorithm with high precision rarely merges two different inventors who happen to share a similar name, and an algorithm with high recall rarely splits one inventor's patents across several identities.
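    To make the pairwise definitions concrete, here is a minimal sketch (not Binette's code) of computing precision and recall from a set of algorithm-linked pairs and a set of true pairs:

    ```python
    # Pairwise precision and recall, assuming we can enumerate the record
    # pairs the algorithm linked (predicted_pairs) and the pairs that truly
    # share an inventor (true_pairs). Pairs are written in a consistent order.
    def pairwise_precision_recall(predicted_pairs, true_pairs):
        true_positives = len(predicted_pairs & true_pairs)
        precision = true_positives / len(predicted_pairs)  # linked pairs that are correct
        recall = true_positives / len(true_pairs)          # true pairs that were found
        return precision, recall

    # Toy example: 3 predicted links, 2 of them correct, 4 true links in total.
    pred = {("a", "b"), ("a", "c"), ("d", "e")}
    truth = {("a", "b"), ("a", "c"), ("a", "d"), ("f", "g")}
    print(pairwise_precision_recall(pred, truth))  # approximately (0.667, 0.5)
    ```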

    He tested each representation using PatentsView’s current disambiguated inventor data. For the test, he treated that data as the ground truth, then randomly added in errors before calculating precision and recall.  

    He repeated the process 100 times. Then, he performed additional tests on two existing benchmark datasets and on a hand-labeled disambiguation dataset.
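    In outline, the simulation can be reconstructed as the following sketch; the toy clustering and the simple random-reassignment error model are assumptions for illustration, not Binette's actual procedure:

    ```python
    import random
    from itertools import combinations

    # Sketch of the simulation: treat the current disambiguation as ground
    # truth, randomly corrupt some inventor assignments, then measure pairwise
    # precision and recall of the corrupted clustering against that "truth".

    def clusters_to_pairs(labels):
        """All unordered record pairs assigned to the same inventor ID."""
        by_id = {}
        for record, inventor in labels.items():
            by_id.setdefault(inventor, []).append(record)
        return {frozenset(p) for recs in by_id.values() for p in combinations(recs, 2)}

    def perturb(labels, error_rate, rng):
        """Randomly reassign a fraction of records to another inventor ID."""
        ids = sorted(set(labels.values()))
        return {r: (rng.choice(ids) if rng.random() < error_rate else inv)
                for r, inv in labels.items()}

    rng = random.Random(0)
    truth = {"r1": "inv_a", "r2": "inv_a", "r3": "inv_b", "r4": "inv_b", "r5": "inv_b"}
    true_pairs = clusters_to_pairs(truth)

    precisions, recalls = [], []
    for _ in range(100):  # repeated 100 times, as in the paper
        noisy_pairs = clusters_to_pairs(perturb(truth, error_rate=0.1, rng=rng))
        tp = len(noisy_pairs & true_pairs)
        precisions.append(tp / len(noisy_pairs) if noisy_pairs else 1.0)
        recalls.append(tp / len(true_pairs))

    print(sum(precisions) / len(precisions), sum(recalls) / len(recalls))
    ```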

    Using this method, Binette found that PatentsView's inventor disambiguation algorithm had a precision between 79% and 91% and a recall between 91% and 95%, well below the near-100% performance suggested by previous testing on benchmark datasets. This shows that PatentsView's current entity resolution algorithm links some records that in fact belong to different inventors.

    Future uses 

    Binette's evaluation method gives PatentsView a way to reliably analyze the effectiveness of future changes to the entity resolution algorithm. Dive deeper into Binette's method and review his code on his PatentsView Evaluation page on GitHub.

  • Data-in-Action Spotlight: Can natural disasters affect innovation? Evidence from Hurricane Katrina

    As climate events and climate change increase globally, how might this affect innovation and the patenting of intellectual property? Luis Ballesteros of the Questrom School of Business at Boston University explored this question in his research on Hurricane Katrina, published in late 2021.

    A different perspective

    While geographical studies of innovation and patents tend to focus on social features, such as a person's proximity to human and material resources and to institutions, Ballesteros is interested in a different perspective: what he calls "exposure to large shocks." Using PatentsView's disambiguated inventor and location data, Ballesteros wrote Can natural disasters affect innovation? Evidence from Hurricane Katrina, a publication that describes the effects of natural disasters on patents and patenting.

    How Hurricane Katrina affected innovation

    Evidence suggests that large societal shocks produce lasting variations in human risk-aversion behaviors. Based on that evidence, Ballesteros proposes that Hurricane Katrina would have changed innovation outcomes in the U.S.

    More specifically, Ballesteros and the supporting literature suggest that after an immediate shock, affected counties show much more patenting activity and higher-quality innovation than non-exposed counties. This effect has been shown to persist for roughly 10 years after the initial shock.

    Methodology

    Ballesteros' method involved constructing a history of inventors between 1999 and 2015 that allowed him to follow the "Katrina effect" across geographies. His estimates imply that shock-affected people were not only more likely to patent, but that their patenting also skewed more toward high-technology sectors.

    Ballesteros controlled for natural variation versus shock-related variance in several ways, which he illustrates in section four of the publication, Empirical Strategy. In section three, Data, he provides insights on the challenges and nuances of working with patent data, including accounting for the average processing time between a patent's application date and its grant (reported by the USPTO as 23 months on average in 2021) and how this bears on longitudinal research with patent data.

    Read the full publication to learn more about Ballesteros’ methods and insights on working with patent and PatentsView data.

    How are you using PatentsView data?

    If you have used PatentsView data in your own research, organization, or classroom and would like to be highlighted in a Data-in-Action spotlight piece, please visit our service desk.


    Citation for Luis Ballesteros' work: Ballesteros, Luis, Can natural disasters affect innovation? Evidence from Hurricane Katrina (December 13, 2021). Available at SSRN: https://ssrn.com/abstract=3980107 or http://dx.doi.org/10.2139/ssrn.3980107

  • What's New with PatentsView - March 2023

    March Updates

    This month, PatentsView released the third quarter of 2021 data complete with the new algorithm and data structure updates initiated last fall. The release notes web page holds detailed information on this release and historical releases.

    Also released this month are annualized gender data files with new documentation and an updated data dictionary from the Office of the Chief Economist (OCE) at the United States Patent and Trademark Office (USPTO). These datasets are designed both for quick exploratory data analysis and to be read programmatically by more longitudinally focused data users. The annual files bring together information from the assignee, inventor, location, application, and patent tables in one place for a more comprehensive picture of patenting teams. In addition to variables pulled from these separate PatentsView data tables, the datasets contain novel variables, including the total number of inventors on a given patent, the number of those inventors who were assigned a gender, the number of men inventors on each patent, the number of women inventors on each patent, and a flag indicating whether inventor information is available for that patent.

    To read more about these data files and the inspiration for their generation, visit the Gender & Innovation page and navigate to the DATA section located under the interactive visualization of gender data from 2000 to 2020.
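    As a hypothetical illustration of the quick exploratory analysis these files are designed for (the file name and column names below are placeholders; consult the new data dictionary for the actual schema):

    ```python
    import pandas as pd

    # Hypothetical exploratory analysis of an annualized gender data file.
    # The file name and column names are assumptions; see the data dictionary
    # on the Gender & Innovation page for the real ones.
    df = pd.read_csv("annual_gender_2020.csv")  # placeholder file name

    # Among patents where at least one inventor was assigned a gender,
    # what share list at least one woman inventor?
    attributed = df[df["num_inventors_attributed"] > 0]     # assumed column
    share = (attributed["num_women_inventors"] > 0).mean()  # assumed column
    print(f"{share:.1%} of gender-attributed patents list at least one woman inventor")
    ```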

    Looking Ahead

    The next PatentsView data update is gearing up this March and will result in a double release of 2021 quarter four and 2022 quarter one data in early summer. The team is working with OCE this spring to improve and optimize the assignee disambiguation and gender attribution algorithms, work that is expected to yield higher-quality data. As always, please reach out to our team with data questions and suggestions. Your exploration of the data and your reports of discrepancies and errors help our team return the highest-quality data to the public.

    To receive regular updates on the PatentsView team's work distributing patent data and reviewing the patenting literature, subscribe to our bi-monthly newsletter. Happy Spring!


  • Spotlight on Patricia Bath

    In 1986, Patricia Bath filed for a medical patent for a novel method to remove eye cataracts. Bath was the first African American female physician to acquire a patent. Her patent has been referenced over 100 times since its filing and was cited as recently as February 2022 by Gregg Scheller and Matthew N. Zeid in their steerable laser probe patent.

    The patent data alone will not tell you that Bath was the first African American female physician to acquire a patent. The race of inventors, like gender, is not part of any data collected by the USPTO and would require attribution algorithms similar to the gender attribution currently conducted by the PatentsView team.  

    Photo: Dr. Patricia Bath

    Dig Deeper into Patent Data 

    PatentsView provides an opportunity to look at women in innovation more broadly. With PatentsView's bulk data downloads, you can now query the data for counts and types of inventions by male and female inventors in the aggregate.

    Last fall, PatentsView and the USPTO's Office of the Chief Economist hosted a symposium on the attribution of demographic information to inventors listed on patents. The symposium included updates on predicting gender and race using artificial intelligence and machine learning approaches, as well as insights into the economic implications of these predictions for innovation policy.

    These methods show how researchers can dig deeper into the data to reveal trends and opportunity gaps for inventors and entrepreneurs.  

    Looking Toward the Future 

    The breadth of PatentsView's mission has evolved as the project has matured. Beginning in 2012 with an endeavor to connect and show the work of unique inventors over time and place, the PatentsView project has expanded the scope of its connection and discernment efforts to the assignees, locations, attorneys, and gender of inventors involved in patenting the country's latest innovations.

    As disambiguation algorithms grow more advanced in what they can identify from publicly available information on inventors and their patents, there is a need to consider both the methods and the implications of this line of inquiry.

    For gender attribution, the algorithm assigns a likelihood that the inventor is "male" based on the person's name and location in the world. The other possible labels are "not male" (that is, female, in this dichotomous view of gender) and "unassignable," meaning that the algorithm could not confidently assign male or not-male to the inventor.
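    As a minimal sketch, the three-way labeling can be pictured like this; the likelihood score and the confidence threshold are illustrative assumptions, not the team's actual model or cutoff:

    ```python
    # Minimal sketch of the three-way gender label described above. The
    # likelihood score and the 0.95 threshold are illustrative assumptions,
    # not the PatentsView team's actual model or cutoff.
    def attribute_gender(p_male: float, threshold: float = 0.95) -> str:
        """Map an estimated likelihood of 'male' to one of three labels."""
        if p_male >= threshold:
            return "male"
        if p_male <= 1 - threshold:
            return "not male"
        return "unassignable"  # not confident in either direction

    print(attribute_gender(0.99))  # "male"
    print(attribute_gender(0.50))  # "unassignable"
    ```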

    A similar method could be applied to estimate the likelihood of an inventor being of a certain race, nationality, or ethnicity. A variety of algorithms are available, built on numerous different methodologies, and each has unique advantages and disadvantages in terms of accuracy, expense, and time.

    What do you think about the future of race attribution in innovation? Tell us in the forum. 

  • American Institutes for Research examines innovation in renewable energy patent study

    Social science research starts with a commitment to using our time and resources to address the problems most affecting society and the human experience. One urgent and globally important area of social science research is renewable energy: specifically, understanding the rate of innovation and adoption in the sector.

    At the American Institutes for Research (AIR), we have a team of data scientists who transform, disambiguate, normalize, and quality-assure all data on patents granted in the United States. This opens opportunities for us to use the data in research and analysis. We chose to develop classification models that predict which patents are related to renewable energy and presented our findings at the Conference for Women in Data Science and Statistics on October 8, 2022, in St. Louis, MO. We used the Cooperative Patent Classification (CPC) labeling system to find renewable energy patents and looked at the most common words used in the abstracts and titles of these patents, demonstrated in the word cloud below.

    Figure 1. Word cloud of the most popular word stems found in patent titles and abstracts.
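    A rough sketch of how such stem counts can be produced (not our production code; the toy titles stand in for the real corpus of titles and abstracts):

    ```python
    from collections import Counter
    from nltk.stem import PorterStemmer

    # Count the most common word stems across patent titles and abstracts.
    # The two toy strings below stand in for the real corpus.
    stemmer = PorterStemmer()
    texts = [
        "Solar energy conversion system for residential power",
        "Wind turbine blade design for improved energy capture",
    ]
    stem_counts = Counter(
        stemmer.stem(word)
        for text in texts
        for word in text.lower().split()
    )
    print(stem_counts.most_common(5))  # top stems, e.g. to feed a word cloud
    ```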


    We built random forest, logistic regression, and naïve Bayes machine-learning classification models on the granted Renewable Energy (RE) patents (CPC subclasses under Y02) to predict whether a given patent is RE-related or not. Our efforts focused on searching for the model construction method and parameter choices that optimized the F1-score for Class 1 (predicted as RE-related). Our best-performing model combined a random forest classifier with a CountVectorizer (a tool that breaks text into countable tokens) applied to patent abstracts, achieving an F1-score of almost 0.85, as shown in Figure 2.
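    The core of that setup can be sketched as follows; the toy abstracts, labels, and default parameters are placeholder assumptions, since the tuned settings and training data are not shown here:

    ```python
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics import confusion_matrix, f1_score
    from sklearn.pipeline import make_pipeline

    # Sketch of the best-performing setup described above: CountVectorizer
    # features from patent abstracts feeding a random forest classifier.
    abstracts = [
        "photovoltaic solar panel with improved conversion efficiency",
        "wind turbine blade pitch control system",
        "geothermal heat pump for residential heating",
        "battery storage for grid scale renewable power",
        "relational database indexing method",
        "user interface for mobile messaging",
        "catalytic converter for gasoline engines",
        "surgical stapler with articulating head",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = RE-related (CPC Y02), 0 = not

    model = make_pipeline(CountVectorizer(), RandomForestClassifier(random_state=0))
    model.fit(abstracts, labels)

    preds = model.predict(abstracts)  # in-sample, for illustration only
    print("F1 (class 1):", f1_score(labels, preds, pos_label=1))
    print(confusion_matrix(labels, preds))  # rows: true class; columns: predicted
    ```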

    Figure 2. Results of random forest classifier and CountVectorizer methods on RE identification in patent abstracts.


    Figure 3 is the confusion matrix for the model described in Figure 2. The matrix shows where correct and incorrect predictions occur. For example, 21,622 patents were predicted to be RE-related but had not been given the Y02 CPC classification.

    Figure 3. Confusion matrix for the random forest classification algorithm.


    Patent data, made usable by the PatentsView project developed at AIR under the supervision of the Office of the Chief Economist at the USPTO, is paramount to holding the federal government accountable for investing in and encouraging innovation in science and technology in the areas important to scientists and the public. While the challenges to increased domestic and international adoption of solar, wind, and other innovations are interwoven and interdisciplinary, the rate of innovation in the renewable energy sector is an important component to analyze and understand as we push to transition away from fossil-fueled power.

