Dataset Viewer (auto-converted to Parquet)
Columns: Abstracts (string, lengths 379 to 1.97k); Class (string, 21 distinct values)
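The column summary above (Abstracts string lengths 379 to 1.97k, 21 Class values) can be recomputed with pandas once the Parquet conversion is loaded; a minimal sketch on two illustrative placeholder rows (the row values here are not real dataset rows):

```python
import pandas as pd

# Toy frame mimicking the dataset's two-column schema ("Abstracts": string,
# "Class": string). The real rows live in the Parquet conversion; these
# values are illustrative placeholders only.
df = pd.DataFrame({
    "Abstracts": [
        "Sign language is one of the oldest and most natural forms of language ...",
        "This paper presents a comparison of post-editing (PE) changes ...",
    ],
    "Class": [
        "Sign Language and Fingerspelling Recognition",
        "Rule-based MT (RBMT)",
    ],
})

# The viewer's summary statistics: the string-length range of Abstracts and
# the number of distinct Class labels (379, 1.97k, and 21 on the full data).
lengths = df["Abstracts"].str.len()
print(lengths.min(), lengths.max())
print(df["Class"].nunique())
```

On the full dataset the same three expressions would reproduce the viewer's reported 379, 1.97k, and 21.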
Sign language is one of the oldest and most natural forms of language for communication, but since most people do not know sign language and interpreters are very difficult to come by, we have come up with a real-time method using neural networks for fingerspelling-based Indian Sign Language. We collected a dataset of ...
Sign Language and Fingerspelling Recognition
Sign language recognition is one of the most challenging tasks of today's era. Most of the researchers working in this domain have focused on different types of implementations for sign recognition. These implementations require the development of smart prototypes for capturing and classifying sign gestures. Keeping in...
Sign Language and Fingerspelling Recognition
Sign language users tend to be socially restricted due to the general population's lack of knowledge of sign language. Some attempts have been made to develop technologies that improve this aspect by translating sign language. However, these approaches generally use a third-person camera for collecting the information,...
Sign Language and Fingerspelling Recognition
Although not a global language, sign language is an essential tool for the deaf community. Communication between these communities and the hearing population is severely hampered, as human-based interpretation can be both costly and time-consuming. In this paper, we present a real-time American Sign Language (ASL) ...
Sign Language and Fingerspelling Recognition
Sign Language Recognition (SLR) is a Computer Vision (CV) and Machine Learning (ML) task, with potential applications that would be beneficial to the Deaf community, which includes not only deaf persons but also hearing people who use Sign Languages. SLR is particularly challenging due to the lack of training datasets ...
Sign Language and Fingerspelling Recognition
Sign language is a method of communication using hand gestures, usually used by Deaf people. In Indonesia, there are two types of sign language, namely SIBI and BISINDO; however, in everyday life, BISINDO is more often used. Communication gaps often occur between Deaf people and hearing people, so we need me...
Sign Language and Fingerspelling Recognition
The goal of sign language technologies is to develop a bridging solution for the communication gap between the hearing-impaired community and the rest of society. Real-time Sign Language Recognition (SLR) is a state-of-the-art subject that promises to facilitate communication between the hearing-impaired community and ...
Sign Language and Fingerspelling Recognition
The recent development of disability studies in academic bodies has expedited the promotion of investigation on disability. With computer-aided tools, communication between the impaired person and someone who does not understand sign language could be accessible. A large number of people across the world are using sign...
Sign Language and Fingerspelling Recognition
Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Fingerspelling recognition from isolated sign language has attracted research interest in computer vision and human-computer interaction based on a novel technique. The essential for re...
Sign Language and Fingerspelling Recognition
Sign Language Recognition (SLR) is a complex gesture recognition problem because of the quick and highly coarticulated motion involved in gestures. This research work focuses on the Fingerspelling recognition task, which constitutes 35% of American Sign Language (ASL). Fingerspelling identifies the word letter by letter...
Sign Language and Fingerspelling Recognition
Natural Language Processing (NLP) is a vital field of artificial intelligence that automates the study of human language. However, for Malay manuscripts (MM) written in old Jawi, exposure to this field is limited. Besides, most studies related to MM and NLP have focused on rule-based or rule-based mac...
Rule-based MT (RBMT)
This paper presents a comparison of post-editing (PE) changes performed on English-to-Finnish neural (NMT), rule-based (RBMT) and statistical machine translation (SMT) output, combining a product-based and a process-based approach. A total of 33 translation students acted as participants in a PE experiment providing bo...
Rule-based MT (RBMT)
Machine translation has witnessed great development in the recent decades and we have entered the era of neural machine translation (NMT). A review of MT is necessary for a better understanding of the relationship between MT and human translators and translation teaching in this era when MT has flourished. This paper f...
Rule-based MT (RBMT)
This article re-looks into machine translation (MT) errors and proposes a function-oriented MT post-editing (MTPE) typology in a new technological context. Driven by the technological advances of the neural machine translation (NMT) system over the past several years, the author thinks that we should re-examine MT erro...
Rule-based MT (RBMT)
To build an Indonesian Machine Translation (MT) system, not only is syntactic analysis related to the correct spelling of words needed, but also contextual analysis, consisting of word type and function, morphology, and semantics. Dictionary usage is needed to translate Indonesian basic words and to capt...
Rule-based MT (RBMT)
In this paper we describe a rule-based, bi-directional machine translation system for the Finnish-English language pair. The baseline system was based on the existing data of FinnWordNet, omorfi and apertium-eng. We have built the disambiguation, lexical selection and translation rules by hand. The dictionaries and rul...
Rule-based MT (RBMT)
Corpus-based approaches to machine translation (MT) have difficulties when the amount of parallel corpora to use for training is scarce, especially if the languages involved in the translation are highly inflected. This problem can be addressed from different perspectives, including data augmentation, transfer learning...
Rule-based MT (RBMT)
Machine translation translates one language into another and has undergone a great evolution. Machine translation models have been continuously improved, aiming to bring the translation effect closer to human translation. This article briefly summarizes the development history of machine t...
Rule-based MT (RBMT)
This article aimed to address the problems of word order confusion, context dependency, and ambiguity in traditional machine translation (MT) methods for verb recognition. By applying advanced intelligent algorithms of artificial intelligence, verb recognition can be better processed and the quality and accuracy of MT ...
Rule-based MT (RBMT)
Machine translation (MT) systems translate text between different languages by automatically learning in-depth knowledge of bilingual lexicons, grammar and semantics from the training examples. Although neural machine translation (NMT) has led the field of MT, we have a poor understanding on how and why it works. In th...
Rule-based MT (RBMT)
The landscape of transformer model inference is increasingly diverse in model size, model characteristics, latency and throughput requirements, hardware requirements, etc. With such diversity, designing a versatile inference system is challenging. DeepSpeed-Inference addresses these challenges by (1) a multi-GPU infere...
Transformer Models
The transformer is the most critical algorithmic innovation in the Natural Language Processing (NLP) field in recent years. Unlike Recurrent Neural Network (RNN) models, transformers can process along the sequence-length dimension in parallel, which leads to better accuracy on long sequences. However, effici...
Transformer Models
Transformer is the state-of-the-art model in recent machine translation evaluations. Two strands of research are promising to improve models of this kind: the first uses wide networks (a.k.a. Transformer-Big) and has been the de facto standard for development of the Transformer system, and the other uses deeper languag...
Transformer Models
Transformer architectures are highly expressive because they use self-attention mechanisms to encode long-range dependencies in the input sequences. In this paper, we present a literature review on Transformer-based (TB) models, providing a detailed overview of each model in comparison to the Transformer’s standard arc...
Transformer Models
The question answering system is frequently applied in the area of natural language processing (NLP) because of the wide variety of applications. It consists of answering questions using natural language. The problem is, in general, solved by employing a dataset that consists of an input text, a query, and the text seg...
Transformer Models
Transformer-based sequence-to-sequence architectures, while achieving state-of-the-art results on a large number of NLP tasks, can still suffer from overfitting during training. In practice, this is usually countered either by applying regularization methods (e.g. dropout, L2-regularization) or by providing huge amount...
Transformer Models
Transformer-based models are the state-of-the-art for Natural Language Understanding (NLU) applications. Models are getting bigger and better on various tasks. However, Transformer models remain computationally challenging since they are not efficient at inference-time compared to traditional approaches. In this paper,...
Transformer Models
Transformer-based deep NLP models are trained using hundreds of millions of parameters, limiting their applicability in computationally constrained environments. In this paper, we study the cause of these limitations by defining a notion of Redundancy, which we categorize into two classes: General Redundancy and Task-s...
Transformer Models
In this paper, we present a new approach to time series forecasting. Time series data are prevalent in many scientific and engineering disciplines. Time series forecasting is a crucial task in modeling time series data, and is an important area of machine learning. In this work we developed a novel method that employs ...
Transformer Models
We introduce DropHead, a structured dropout method specifically designed for regularizing the multi-head attention mechanism which is a key component of transformer. In contrast to the conventional dropout mechanism which randomly drops units or connections, DropHead drops entire attention heads during training to prev...
Transformer Models
Generalization is a key element behind a strong performing neural network: models that generalize perform well even with novel inputs. We investigated a specific form of generalization known as systematic compositionality, the algebraic capacity to understand and produce a potentially infinite number of novel combinati...
Recurrent Neural Networks (RNNs)
Recurrent neural networks (RNNs) have demonstrated very impressive performances in learning sequential data, such as in language translation and music generation. Here, we show that the intrinsic computational aspect of RNNs is very similar to that of classical stress update algorithms in modeling history-dependent mat...
Recurrent Neural Networks (RNNs)
Recurrent neural networks (RNNs) have been widely adopted in research areas concerned with sequential data, such as text, audio, and video. However, RNNs consisting of sigma cells or tanh cells are unable to learn the relevant information of input data when the input gap is large. By introducing gate functions into the...
Recurrent Neural Networks (RNNs)
Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended....
Recurrent Neural Networks (RNNs)
Back-propagation through time (BPTT) has been widely used for training Recurrent Neural Networks (RNNs). BPTT updates RNN parameters on an instance by back-propagating the error in time over the entire sequence length, and as a result, leads to poor trainability due to the well-known gradient explosion/decay phenomena....
Recurrent Neural Networks (RNNs)
This paper addresses the synchronization of multiple fractional-order recurrent neural networks (RNNs) with time-varying delays under event-triggered communications. Based on the assumption of the existence of strong connectivity or a spanning tree in the communication digraph, two sets of sufficient conditions are der...
Recurrent Neural Networks (RNNs)
In this paper, we address the Clifford-valued distributed optimization subject to linear equality and inequality constraints. The objective function of the optimization problems is composed of the sum of convex functions defined in the Clifford domain. Based on the generalized Clifford gradient, a system of multiple Cl...
Recurrent Neural Networks (RNNs)
Variants of deep networks have been widely used for hyperspectral image (HSI)-classification tasks. Among them, in recent years, recurrent neural networks (RNNs) have attracted considerable attention in the remote sensing community. However, complex geometries cannot be learned easily by the traditional recurrent units...
Recurrent Neural Networks (RNNs)
This paper presents a sentiment analysis solution on tweets using Recurrent Neural Networks (RNNs). The method can classify tweets with an 80.74% accuracy rate on a binary task, after experimenting with 20 different design approaches. The solution integrates an attention mechanism aiming to enhance the ...
Recurrent Neural Networks (RNNs)
State-of-the-art solutions in the areas of "Language Modelling & Generating Text", "Speech Recognition", "Generating Image Descriptions" or "Video Tagging" have been using Recurrent Neural Networks as the foundation for their approaches. Understanding the underlying concepts is therefore of tremendous importance if we ...
Recurrent Neural Networks (RNNs)
Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities across various applications. Nevertheless, concerns persist regarding the accuracy and appropriateness of their generated content. A contemporary methodology, self-correction, has been proposed ...
Large Language Models (LLMs)
Large Language Models (LLMs) are a type of artificial intelligence that has been revolutionizing various fields, including biomedicine. They have the capability to process and analyze large amounts of data, understand natural language, and generate new content, making them highly desirable in many biomedical applicatio...
Large Language Models (LLMs)
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we pr...
Large Language Models (LLMs)
Large language models (LLMs) can perform complex reasoning by generating intermediate reasoning steps. Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting. CoT prompting has two major paradigms. One leverages a simple prompt like"Let's think step by step"to facilitate step-by-s...
Large Language Models (LLMs)
Since the recent prosperity of Large Language Models (LLMs), there have been interleaved discussions regarding how to reduce hallucinations from LLM responses, how to increase the factuality of LLMs, and whether Knowledge Graphs (KGs), which store the world knowledge in a symbolic form, will be replaced with LLMs. In t...
Large Language Models (LLMs)
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate facts and make non-factual statements which can undermine trust in their output. Existing fact-checking approaches either require access to th...
Large Language Models (LLMs)
Large Language Models (LLMs) have demonstrated remarkable zero-shot generalization across various language-related tasks, including search engines. However, existing work utilizes the generative ability of LLMs for Information Retrieval (IR) rather than direct passage ranking. The discrepancy between the pre-training o...
Large Language Models (LLMs)
Code Large Language Models (Code LLMs), such as StarCoder, have demonstrated exceptional performance in code-related tasks. However, most existing models are solely pre-trained on extensive raw code data without instruction fine-tuning. In this paper, we introduce WizardCoder, which empowers Code LLMs with complex inst...
Large Language Models (LLMs)
The performance of large language models (LLMs) on existing reasoning benchmarks has significantly improved over the past years. In response, we present JEEBench, a considerably more challenging benchmark dataset for evaluating the problem solving abilities of LLMs. We curate 515 challenging pre-engineering mathematics...
Large Language Models (LLMs)
Large language models (LLMs) have emerged as a widely-used tool for information seeking, but their generated outputs are prone to hallucination. In this work, our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability. Existing work mainly relies on commercial search...
Large Language Models (LLMs)
Bilingual Lexicon Induction (BLI) aims at inducing word translations in two distinct languages. The generated bilingual dictionaries via BLI are essential for cross-lingual NLP applications. Most existing methods assume that a mapping matrix can be learned to project the embedding of a word in the source language to th...
Bilingual Lexicon Induction (BLI)
Much recent work in bilingual lexicon induction (BLI) views word embeddings as vectors in Euclidean space. As such, BLI is typically solved by finding a linear transformation that maps embeddings to a common space. Alternatively, word embeddings may be understood as nodes in a weighted graph. This framing allows us to ...
Bilingual Lexicon Induction (BLI)
Most Bilingual Lexicon Induction (BLI) methods retrieve word translation pairs by finding the closest target word for a given source word based on cross-lingual word embeddings (WEs). However, we find that solely retrieving translation from the source-to-target perspective leads to some false positive translation pairs...
Bilingual Lexicon Induction (BLI)
Bilingual word lexicons map words in one language to their synonyms in another language. Numerous papers have explored bilingual lexicon induction (BLI) in high-resource scenarios, framing a typical pipeline that consists of two steps: (i) unsupervised bitext mining and (ii) unsupervised word alignment. At the core of ...
Bilingual Lexicon Induction (BLI)
Bilingual Lexicon Induction (BLI) is a core task in multilingual NLP that still, to a large extent, relies on calculating cross-lingual word representations. Inspired by the global paradigm shift in NLP towards Large Language Models (LLMs), we examine the potential of the latest generation of LLMs for the development o...
Bilingual Lexicon Induction (BLI)
The word embedding models such as Word2vec and FastText simultaneously learn dual representations of input vectors and output vectors. In contrast, almost all existing unsupervised bilingual lexicon induction (UBLI) methods use only input vectors without utilizing output vectors. In this article, we propose a novel app...
Bilingual Lexicon Induction (BLI)
Contextualized word embeddings have emerged as the most important tool for performing NLP tasks in a large variety of languages. In order to improve the cross-lingual representation and transfer learning quality, contextualized embedding alignment techniques, such as mapping and model fine-tuning, are employed. Existi...
Bilingual Lexicon Induction (BLI)
Bilingual Lexicon Induction (BLI) aims to map words in one language to their translations in another, and is typically through learning linear projections to align monolingual word representation spaces. Two classes of word representations have been explored for BLI: static word embeddings and contextual representation...
Bilingual Lexicon Induction (BLI)
Bilingual Lexicon Induction (BLI), where words are translated between two languages, is an important NLP task. While noticeable progress on BLI in rich-resource languages has been achieved using static word embeddings, the word translation performance can be further improved by incorporating information from contextual...
Bilingual Lexicon Induction (BLI)
Bilingual lexicon induction (BLI) with limited bilingual supervision is a crucial yet challenging task in multilingual NLP. Current state-of-the-art BLI methods rely on the induction of cross-lingual word embeddings (CLWEs) to capture cross-lingual word similarities; such CLWEs are obtained 1) via traditional static mo...
Bilingual Lexicon Induction (BLI)
In this article, we describe recent trends in the detection of hate speech and offensive language on social media, drawing on the latest studies and scientific contributions. The article describes current trends and the most used methods in connection with the detection of hate speech and offensive language...
Hate and Offensive Speech Detection
Preprocessing is a crucial step for each task related to text classification. Preprocessing can have a significant impact on classification performance, but at present there are few large-scale studies evaluating the effectiveness of preprocessing techniques and their combinations. In this work, we explore the impact o...
Hate and Offensive Speech Detection
Offensive language and hate speech have been rampant on social media platforms (Facebook, Twitter, etc.) in Egypt for quite a while now, appearing in tweets, Facebook posts, comments, etc. It is an increasingly far-reaching problem that needs immediate attention. This paper focuses on the problem of detecting and classify...
Hate and Offensive Speech Detection
The easy accessibility of different online platforms allows individuals to express their ideas and share experiences without any restriction, owing to freedom of speech. Since social media lack a general framework to distinguish hate from neutral speech, this results in anonymity. However, the propaga...
Hate and Offensive Speech Detection
On social media networks like Twitter, Facebook, and Tumblr, people frequently share information. However, these platforms are also notorious for the spread of hate speech and insults, often posted anonymously. Hate speech involves using violent, abusive, or aggressive language towards a particular group based on facto...
Hate and Offensive Speech Detection
The user-generated content on the internet, including that on social media, may contain offensive language and hate speech, which negatively affect the mental health of the whole internet society and may lead to hate crimes. Intelligent models for automatic detection of offensive language and hate speech have attracted ...
Hate and Offensive Speech Detection
The prevalence of social media platforms prompted detecting any language that is intended to harm or intimidate another person or group of people in online posts and comments. On Twitter, for instance, users are susceptible to cyberbullying and hate speech, which may develop into physical and psychological violence. A ...
Hate and Offensive Speech Detection
Social media often serves as a breeding ground for various hateful and offensive content. Identifying such content on social media is crucial due to its impact on the race, gender, or religion in an unprejudiced society. However, while there is extensive research in hate speech detection in English, there is a gap in h...
Hate and Offensive Speech Detection
With online social platforms becoming more and more accessible to the common masses, the volume of public utterances on a range of issues, events, and persons etc. has increased profoundly. Though most of the content is a manifestation of personal feelings of the individuals, yet a lot of this content often comprises o...
Hate and Offensive Speech Detection
Internet and social media usage has skyrocketed over the past two decades, changing how people communicate with one another on a basic level. Numerous favourable outcomes have resulted from this, but the risks and harms that come with it are also present. It is impossible for humans to control the amount of damaging content, ...
Hate and Offensive Speech Detection
Because of the rapid advancement of technology over the last several years, the number of internet users is growing at an exponential rate, and as a result, email communication has become popular as a means of exchanging information over the internet. Sending data and communicating with peers via email is the most cost...
Email Spam and Phishing Detection
Email spam has become a vital issue currently, with the high-speed growth of internet users. Some people use it for illegal conduct, phishing, and fraud, sending malicious links through spam emails that can harm our systems or let attackers sneak into them. Email spam detection is needed to prevent s...
Email Spam and Phishing Detection
Email spam has become a prevalent issue in recent times, with the growing number of internet users, spam emails are also on the rise. Many individuals use them for illegal and unethical activities such as phishing and fraud. Spammers send dangerous links through spam emails, which can harm our systems and gain access t...
Email Spam and Phishing Detection
Anything that is connected to the internet is vulnerable, for example mobile phones, personal laptops, tablets, routers, and smart speakers. Cybercriminals need one point of weakness like unprotected devices or a weak password or any attachment to potentially enter into the system. There is a need to pause before proce...
Email Spam and Phishing Detection
With the influx of technological advancements and the increased simplicity in communication, especially through emails, the upsurge in the volume of unsolicited bulk emails (UBEs) has become a severe threat to global security and economy. Spam emails not only waste users’ time, but also consume a lot of network bandwid...
Email Spam and Phishing Detection
Phishing emails pose a severe risk to online users, necessitating effective identification methods to safeguard digital communication. Detection techniques are continuously researched to address the evolution of phishing strategies. Machine learning (ML) is a powerful tool for automated phishing email detection, but ex...
Email Spam and Phishing Detection
Spam is the act of sending unsolicited emails to a large number of users for phishing, spreading malware, etc. Internet Service Providers (ISPs) and email inbox providers (like Gmail, Yahoo Mail, AOL, etc.) rely on SPAM filters, firewalls, and blacklist directories to prevent "unsolicited" SPAM emails from entering you...
Email Spam and Phishing Detection
The risk of cyberattacks against businesses has risen considerably, with Business Email Compromise (BEC) schemes taking the lead as one of the most common phishing attack methods. The daily evolution of this assault mechanism’s attack methods has shown a very high level of proficiency against organisations. Since the m...
Email Spam and Phishing Detection
Breakthroughs in technology are happening as we speak, but the threat of their misuse is also increasing. Even a tiny amount of exposure within an organization can potentially force the organization out of business. In a digital world, information is the greatest asset. A phishing attack is an attack on the critical in...
Email Spam and Phishing Detection
The proliferation of phishing sites and emails poses significant challenges to existing cybersecurity efforts. Despite advances in spam filters and email security protocols, problems with oversight and false positives persist. Users often struggle to understand why emails are flagged as spam, risking the possibility of...
Email Spam and Phishing Detection
Fake news production, accessibility, and consumption have all increased with the rise of internet-connected gadgets and social media platforms. A good fake news detection system is essential because the news readers receive can affect their opinions. Several works on fake news detection have been done using machine lea...
Fake News Detection
The strategy for identifying fake news incorporates a blend of Natural Language Processing (NLP) techniques, Reinforcement Learning (RL), and blockchain technology. Identifying false information on Twitter is essential because of the platform's broad appeal and significant impact on public conversation. For million...
Fake News Detection
The paper presents our solutions for the MediaEval 2020 task namely FakeNews: Corona Virus and 5G Conspiracy Multimedia Twitter-Data-Based Analysis. The task aims to analyze tweets related to COVID-19 and 5G conspiracy theories to detect misinformation spreaders. The task is composed of two sub-tasks namely (i) text-ba...
Fake News Detection
In today's digital age, the swift spread of information has revolutionized the way news consumers stay informed. However, this convenience comes with a downside: the propagation of fake news, which can spread misinformation, manipulate public opinion, and undermine the credibility of legitimate sourc...
Fake News Detection
Given the ubiquity of fake news online, a reliable mechanism for automated detection is needed. This project proposes a new end-to-end detection pipeline, which uses Natural Language Processing (NLP) techniques for automated evidence extraction from online sources given an input claim of arbitrary length. This project ...
Fake News Detection
With the widespread use of social media platforms within our modern society, these platforms have become a popular medium for disseminating news across the globe. While some of these platforms are considered reliable sources for sharing news, others publicize the information without much validation. The transmission of...
Fake News Detection
With the development of technology, the spread of fake news on social networks is increasing. Many researchers and organizations have taken action to detect fake news manually or automatically. In this study, various Machine Learning Algorithms and Transformer based approaches are used to select the best performing mod...
Fake News Detection
Currently, fake news easily goes viral on social networks, which is a cause for concern worldwide. An alternative for detecting this type of information is the use of Machine Learning and Natural Language Processing. Nevertheless, due to the high volume of information, it is crucial to define mechanisms that are easy to implement and to...
Fake News Detection
The upsurge of fake news in recent times, facilitated due to the swift dissemination of information on social media, has necessitated the development of advanced detection techniques. This research focuses on optimizing the Hugging Face Transformer models – a cutting-edge Natural Language Processing (NLP) tool – to enh...
Fake News Detection
Fake news has been a problem ever since the internet boomed. The easier access and exponential growth of the knowledge offered on social media networks have made it difficult to differentiate between false and true information. Opposing such fake news is important because the world's view and mindset are shaped...
Fake News Detection
Recently, product/service reviews and online businesses have been likened to the blood-heart relationship, as they greatly impact customers' purchase decisions. There is an increasing incentive to manipulate reviews, mostly profit-motivated, as positive reviews imply high purchases and vice versa. Therefore, a suitable...
Fake Review Detection
Now-a-days, use of apps has increased with the increasing craze towards mobiles. For all types of mobile application, users are preferring smartphones. Generally, depending on how many users already have downloaded that application? , what are the ratings and reviews? , what are the comments? , etc., users download mob...
Fake Review Detection
Online reviews are a growing market, but it is struggling with fake reviews. They undermine both the value of reviews to the user, and their trust in the review sites. However, fake positive reviews can boost a business, and so a small industry producing fake reviews has developed. The two sides are facing an arms race...
Fake Review Detection
Online shopping stores have grown steadily over the past few years. Due to the massive growth of these businesses, the detection of fake reviews has attracted attention. Fake reviews are seriously trying to mislead customers and thereby undermine the honesty and authenticity of online shopping environments. So far, var...
Fake Review Detection
In this COVID-19 scenario the majority have an interest in on-line searching. So, many folks order the merchandise depends on the previous reviews. These reviews square measure enjoying necessary role in creating purchase choices. however, in these reviews' spammers might manufacture pretend reviews because of such beh...
Fake Review Detection
In order to enhance brand benefits or discredit competitors, some merchants hire fake reviewers to post large amounts of fake reviews on e-commerce platforms. This behavior inevitably harms consumers’ interests and causes unfair market competition for other merchants. Researches on fake review detection mainly focus on...
Fake Review Detection
Detecting fake reviews can help customers make better purchasing decisions and maintain a positive online business environment. In recent years, pre-trained language models have significantly improved the performance of natural language processing tasks. These models are able to generate different representation vector...
Fake Review Detection
Fake (deceptive) reviews have become a serious problem for online consumers, with the proliferation of online marketplaces leading to an increase in spurious reviews that are often used to lure or discourage potential customers. While sentiment analysis has been introduced to the e-commerce sector, the lack of an effec...
Fake Review Detection
The increasing prevalence of fake online reviews jeopardizes firms' profits, consumers' well-being, and the trustworthiness of e-commerce ecosystems. We face the significant challenge of accurately detecting fake reviews. In this paper, we undertake a comprehensive investigation of traditional and state-of-the-art mach...
Fake Review Detection
Fighting fake news is a difficult and challenging task. With an increasing impact on the social and political environment, fake news exert an unprecedently dramatic influence on people’s lives. In response to this phenomenon, initiatives addressing automated fake news detection have gained popularity, generating widesp...
Fake Review Detection

This benchmark is from the SciPrompt paper: https://huggingface.co/papers/2410.01946

Emerging NLP encompasses 21 newly developed research fields within the broader category of Computation and Language. We collect 30 examples for each topic, assigning five instances per topic for training and another five for validation; the remaining 20 examples per topic are used for testing. In total, the training and validation splits contain 210 examples (105 each), and the test set contains 420.
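
The split arithmetic above can be sketched as a quick sanity check (the constant names below are illustrative, not part of the dataset schema):

```python
# Split scheme described above: 21 topics x 30 abstracts each,
# with 5 train / 5 validation per topic and the remaining 20
# per topic held out for testing.
NUM_TOPICS = 21
EXAMPLES_PER_TOPIC = 30
TRAIN_PER_TOPIC = 5
VAL_PER_TOPIC = 5

def split_sizes():
    """Return (train, validation, test) totals across all topics."""
    train = NUM_TOPICS * TRAIN_PER_TOPIC
    val = NUM_TOPICS * VAL_PER_TOPIC
    test = NUM_TOPICS * (EXAMPLES_PER_TOPIC - TRAIN_PER_TOPIC - VAL_PER_TOPIC)
    return train, val, test

train, val, test = split_sizes()
print(train + val, test)  # 210 420
```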

For detailed information about the dataset or the SciPrompt framework, please refer to our GitHub repo and the EMNLP paper.

Citation Information

For use of SciPrompt or the Emerging NLP benchmark, please cite:


@inproceedings{you-etal-2024-sciprompt,
    title = "{S}ci{P}rompt: Knowledge-augmented Prompting for Fine-grained Categorization of Scientific Topics",
    author = "You, Zhiwen  and
      Han, Kanyao  and
      Zhu, Haotian  and
      Ludaescher, Bertram  and
      Diesner, Jana",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.350",
    pages = "6087--6104",
}

Contact Information

If you have any questions, please email zhiweny2@illinois.edu.
