Word Representation in Biomedical Domain

Before you start, please make sure you have read this notebook. You are encouraged to follow the recommendations but you are also free to develop your own solution from scratch.

Marking Scheme

This project forms 40% of the total score for the summer/winter school. The marking scheme of each part of this project is provided below, with a cap of 100%.

  • Biomedical imaging project: 40%
    • 20%: accuracy of the final model on the test set
    • 20%: rationale of model design and final report
  • Natural language processing project: 40%
    • 30%: completeness of the project
    • 10%: final report
  • Presentation skills and team work: 20%

You are allowed to use open-source libraries as long as they are properly cited in the code and final report. Using third-party code without proper reference will be treated as plagiarism, which will not be tolerated.

You are encouraged to develop the algorithms yourselves (without using third-party code as much as possible). We will factor such effort into the marking process.

Setup and Prerequisites

Recommended environment

  • Python 3.7 or newer
  • Free disk space: 100GB

Download the data

# navigate to the data folder
cd data

# download the data file
# which is also available at https://www.semanticscholar.org/cord19/download
wget https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/2021-07-26/document_parses.tar.gz

# decompress the file which may take several minutes
tar -xf document_parses.tar.gz

# which creates a folder named document_parses

Part 1 (20%): Parse the Data

The JSON files are located in two sub-folders of document_parses. You will need to scan all JSON files and extract text (i.e. strings) from the relevant fields (e.g. body text, abstract, titles).

You are encouraged to extract the full article text from the body text if possible. If hardware resources are limited, you can extract from the abstracts or titles instead.

Note: There are around 425k JSON files, so it may take more than 10 minutes to parse all documents.

For more information about the dataset: https://www.semanticscholar.org/cord19/download

Recommended output:

  • A list of texts (strings) extracted from the JSON files.
In [1]:
###################
# TODO: add your solution

###################
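
As a starting point, here is a minimal parsing sketch (not a reference solution). It assumes the standard CORD-19 schema (metadata.title, abstract, body_text) and the folder layout created by the download step above; adapt the field selection to your hardware budget.

import json
from pathlib import Path

def extract_texts(root="document_parses"):
    """Yield one text string per JSON file (title + abstract + body text)."""
    for path in Path(root).glob("*/*.json"):
        with open(path, encoding="utf-8") as f:
            doc = json.load(f)
        parts = [doc.get("metadata", {}).get("title", "")]
        # PMC-derived files have no "abstract" field, so fall back to an empty list
        parts += [p["text"] for p in doc.get("abstract", [])]
        parts += [p["text"] for p in doc.get("body_text", [])]
        yield " ".join(p for p in parts if p)

texts = list(extract_texts())  # ~425k documents, so expect a long wait
print(len(texts), texts[0][:200])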

Part 2 (30%): Tokenization

Traverse the extracted text and segment it into words (or tokens).

The following tracks can be developed independently. You are encouraged to divide the workload among the team members.

Recommended output:

  • Tokenizer(s) that can tokenize any input text.

Note: Because of the computational complexity of tokenizers, it may take hours or days to process all documents. Which tokenizer is more efficient? Any ideas to speed it up?

Track 2.1 (10%): Use split()

Use the standard split() method provided by Python.
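
A one-line baseline, assuming texts is the list of strings from Part 1:

# Whitespace splitting, the simplest possible tokenizer
tokenized = [t.split() for t in texts]
print(tokenized[0][:20])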

Track 2.2 (10%): Use NLTK or SciSpaCy

NLTK tokenizer: https://www.nltk.org/api/nltk.tokenize.html

SciSpaCy: https://github.com/allenai/scispacy

Note: You may need to install NLTK and SciSpaCy, so please refer to their websites for installation instructions.
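
A minimal sketch with NLTK's word tokenizer, again assuming texts from Part 1; a SciSpaCy pipeline can be swapped in the same way.

import nltk
nltk.download("punkt")  # one-off download of the tokenizer models
from nltk.tokenize import word_tokenize

tokenized = [word_tokenize(t) for t in texts]
print(tokenized[0][:20])

# SciSpaCy alternative (assumes the en_core_sci_sm model is installed):
# import spacy
# nlp = spacy.load("en_core_sci_sm")
# tokenized = [[tok.text for tok in nlp(t)] for t in texts]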

Track 2.3 (10%): Use Byte-Pair Encoding (BPE)

Byte-Pair Encoding (BPE): https://huggingface.co/transformers/tokenizer_summary.html

Note: You may need to install Huggingface's transformers library, so please refer to its website for installation instructions.
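
A minimal sketch applying a pretrained BPE tokenizer; GPT-2's byte-level BPE vocabulary is used here purely as an example, and any BPE-based checkpoint would do.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # byte-level BPE
tokens = tokenizer.tokenize("The main symptoms of COVID-19 are fever and cough.")
print(tokens)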

Track 2.4 (Bonus +5%): Build new Byte-Pair Encoding (BPE)

This track may depend on track 2.3.

The pre-built tokenization methods above may not be suitable for the biomedical domain, as its words/tokens (e.g. diseases, symptoms, chemicals, medications, phenotypes, genotypes, etc.) can be very different from the words/tokens commonly used in daily life. Can you build and train a new BPE model specifically for the biomedical domain?
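
One possible approach, sketched below, is to train a fresh BPE vocabulary on the extracted text with Huggingface's tokenizers library; the vocabulary size of 10k is an assumption borrowed from the limits in Part 3.

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=10000, special_tokens=["[UNK]"])
tokenizer.train_from_iterator(texts, trainer=trainer)  # texts from Part 1
print(tokenizer.encode("acute respiratory distress syndrome").tokens)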

Open Question (Optional):

  • What are the pros and cons of the above tokenizers?
In [2]:
###################
# TODO: add your solution

###################

Part 3 (30%): Build Word Representations

Build word representations for each extracted word. If hardware resources are limited, you may limit the vocabulary size to 10k words/tokens (or even fewer) and the dimension of the representations to 256.

The following tracks can be developed independently. You are encouraged to divide the workload among the team members.

Track 3.1 (15%): Use N-gram Language Modeling

N-gram language modeling predicts a target word from the previous n-1 words of context. Specifically,

$P(w_i \mid w_{i-1}, w_{i-2}, \dots, w_{i-n+1})$

For example, given the sentence "the main symptoms of COVID-19 are fever and cough" and n=7, we use the previous context ["the", "main", "symptoms", "of", "COVID-19", "are"] to predict the next word, "fever".

More to read: https://web.stanford.edu/~jurafsky/slp3/3.pdf

Recommended outputs:

  • A fixed vector for each word/token.
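
A minimal PyTorch sketch of a neural n-gram model whose input embedding matrix provides the word vectors. Here vocab_size and token_ids (lists of integer token ids) are assumed to come from Part 2, and the single-example training loop is kept deliberately simple; batching would be needed in practice.

import torch
import torch.nn as nn

N, DIM = 4, 128  # a 4-gram model: 3 context words predict the next word

class NGramLM(nn.Module):
    def __init__(self, vocab_size, dim, context):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)  # the word representations
        self.out = nn.Linear(dim * context, vocab_size)

    def forward(self, ctx):                       # ctx: (batch, context)
        e = self.emb(ctx).flatten(1)              # concatenate context embeddings
        return self.out(e)                        # logits over the vocabulary

model = NGramLM(vocab_size, DIM, N - 1)
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

for seq in token_ids:                             # one pass, sketch only
    for i in range(N - 1, len(seq)):
        ctx = torch.tensor([seq[i - N + 1:i]])
        tgt = torch.tensor([seq[i]])
        loss = loss_fn(model(ctx), tgt)
        opt.zero_grad(); loss.backward(); opt.step()

word_vectors = model.emb.weight.detach()          # one fixed vector per token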

Track 3.2 (15%): Use Skip-gram with Negative Sampling

In skip-gram, we use a central word to predict its context. Specifically,

$P(w_{c-m}, \dots, w_{c-1}, w_{c+1}, \dots, w_{c+m} \mid w_c)$

As the learning objective of skip-gram is computationally inefficient (it requires a summation over the entire vocabulary $|V|$), negative sampling is commonly applied to accelerate training.

In negative sampling, we take a word from the context as a positive sample and randomly select K words from the vocabulary as negative samples. As a result, the learning objective becomes

$L = -\log \sigma(u_t^T v_c) - \sum_{k=1}^{K} \log \sigma(-u_k^T v_c)$

where $u_t$ is the vector embedding of the positive sample from the context, $u_k$ are the vector embeddings of the negative samples, $v_c$ is the vector embedding of the central word, and $\sigma$ refers to the sigmoid function.

More to read: sections 4.3 and 4.4 of http://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf

Recommended outputs:

  • A fixed vector for each word/token.
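
A minimal PyTorch sketch of the objective above; vocab_size and token_ids are again assumed from Part 2. Negatives are drawn uniformly here for simplicity, whereas word2vec samples from a smoothed unigram distribution.

import torch
import torch.nn as nn

DIM, K, WINDOW = 128, 5, 2

center_emb = nn.Embedding(vocab_size, DIM)    # v_c
context_emb = nn.Embedding(vocab_size, DIM)   # u_t and u_k
opt = torch.optim.Adam(list(center_emb.parameters()) +
                       list(context_emb.parameters()))

def ns_loss(c, t):
    v_c = center_emb(torch.tensor([c]))                    # (1, DIM)
    u_t = context_emb(torch.tensor([t]))                   # positive sample
    u_k = context_emb(torch.randint(0, vocab_size, (K,)))  # negative samples
    pos = torch.log(torch.sigmoid(u_t @ v_c.T)).sum()
    neg = torch.log(torch.sigmoid(-(u_k @ v_c.T))).sum()
    return -(pos + neg)

for seq in token_ids:
    for i, c in enumerate(seq):
        for j in range(max(0, i - WINDOW), min(len(seq), i + WINDOW + 1)):
            if j != i:
                loss = ns_loss(c, seq[j])
                opt.zero_grad(); loss.backward(); opt.step()

word_vectors = center_emb.weight.detach()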

Track 3.3 (Bonus +5%): Use Contextualised Word Representation by Masked Language Model (MLM)

BERT introduces a new pre-training objective named Masked Language Model (MLM). The advantage of MLM is that the resulting word representations are contextualised.

For example, "stick" may have different meanings in different contexts. With n-gram language modeling and word2vec (skip-gram, CBOW), the representation of "stick" is fixed regardless of its context. MLM, however, learns the representation of "stick" dynamically based on context. In other words, "stick" will have a different representation in each context under MLM.

More to read: http://jalammar.github.io/illustrated-bert/ and https://arxiv.org/pdf/1810.04805.pdf

Recommended outputs:

  • An algorithm that is able to generate contextualised representations in real time.
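
A minimal sketch that extracts contextualised vectors from a pretrained BERT checkpoint via Huggingface transformers; pre-training or fine-tuning an MLM on the CORD-19 text itself would be a natural extension.

import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def contextual_vectors(sentence):
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return enc.tokens(), out.last_hidden_state[0]  # one vector per token

# "stick" receives a different vector in each sentence:
tokens_a, vecs_a = contextual_vectors("He poked the fire with a stick.")
tokens_b, vecs_b = contextual_vectors("Please stick to the schedule.")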
In [3]:
###################
# TODO: add your solution

###################

Part 4 (20%): Explore the Word Representations

The following tracks can be finished independently. You are encouraged to divide the workload among the team members.

Track 4.1 (5%): Visualise the word representations by t-SNE

t-SNE is an algorithm for dimensionality reduction, commonly used to visualise high-dimensional vectors. Use t-SNE to visualise the word representations. You may visualise up to 1000 words, as t-SNE is computationally expensive.

More about t-SNE: https://lvdmaaten.github.io/tsne/

Recommended output:

  • A diagram by t-SNE based on representations of up to 1000 words.
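
A minimal sketch with scikit-learn and matplotlib; word_vectors is assumed to be a NumPy array of shape (vocab_size, dim) from Part 3 (call .numpy() first if it is a torch tensor), and vocab an id-to-word list.

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

subset = word_vectors[:1000]                  # cap at 1000 words as advised
coords = TSNE(n_components=2, init="pca").fit_transform(subset)

plt.figure(figsize=(12, 12))
plt.scatter(coords[:, 0], coords[:, 1], s=4)
for i in range(0, len(subset), 20):           # annotate a readable subset
    plt.annotate(vocab[i], coords[i])
plt.savefig("tsne_words.png")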

Track 4.2 (5%): Visualise the Word Representations of Biomedical Entities by t-SNE

Instead of visualising the word representations of the entire vocabulary (or 1000 randomly selected words), visualise the representations of words that are biomedical entities, for example fever, cough, diabetes, etc. Based on the category of each biomedical entity, can you assign different colours to the entities and see whether entities from the same category are clustered by t-SNE? For example, sinusitis and cough are both respiratory conditions, so they should be assigned the same colour, and ideally their representations should be close to each other in the t-SNE plot. As another example, Alzheimer's disease and headache are neurological conditions, which should be assigned a different colour.

Examples of biomedical ontologies: https://www.ebi.ac.uk/ols/ontologies/hp and https://en.wikipedia.org/wiki/International_Classification_of_Diseases

Recommended output:

  • A diagram with colours by t-SNE based on representations of biomedical entities.
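
A minimal sketch of the colouring step, assuming entity2cat maps each entity word to a category label (built by hand or from the ontologies above) and that words/coords come from a t-SNE run over the entity vectors as in Track 4.1.

import matplotlib.pyplot as plt

categories = sorted(set(entity2cat.values()))
palette = plt.cm.tab10(range(len(categories)))
for cat, colour in zip(categories, palette):
    idx = [i for i, w in enumerate(words) if entity2cat.get(w) == cat]
    plt.scatter(coords[idx, 0], coords[idx, 1], color=colour, label=cat, s=8)
plt.legend()
plt.savefig("tsne_entities.png")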

Track 4.3 (5%): Co-occurrence

  • What are the biomedical entities which frequently co-occur with COVID-19 (or coronavirus)?

Recommended outputs:

  • A sorted list of biomedical entities and a description of how the entities are selected and sorted.
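
A minimal window-based counting sketch; tokenized is from Part 2, and entities is an assumed set of lower-cased biomedical entity strings, e.g. drawn from an ontology.

from collections import Counter

WINDOW = 10
counts = Counter()
for tokens in tokenized:
    lowered = [t.lower() for t in tokens]
    for i, t in enumerate(lowered):
        if t in ("covid-19", "coronavirus"):
            for u in lowered[max(0, i - WINDOW):i + WINDOW + 1]:
                if u in entities and u != t:
                    counts[u] += 1

print(counts.most_common(20))  # entities sorted by co-occurrence count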

Track 4.4 (5%): Semantic Similarity

  • What are the biomedical entities which have the closest semantic similarity to COVID-19 (or coronavirus) based on word representations?

Recommended outputs:

  • A sorted list of biomedical entities and a description of how the entities are selected and sorted.
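
A minimal cosine-similarity ranking sketch; word_vectors and word2id are assumed from Part 3, entities as in Track 4.3, and the exact vocabulary key for COVID-19 depends on your tokenizer.

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

query = word_vectors[word2id["covid-19"]]  # key depends on your tokenizer
ranked = sorted(
    (e for e in entities if e in word2id),
    key=lambda e: cosine(word_vectors[word2id[e]], query),
    reverse=True,
)
print(ranked[:20])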

Open Question (Optional): What else can you discover?

In [4]:
###################
# TODO: add your solution

###################

Part 5 (Bonus +10%): Open Challenge: Mining Biomedical Knowledge

A fundamental task in clinical/biomedical natural language processing is to extract intelligence from a biomedical text corpus automatically and efficiently. More specifically, the intelligence may include biomedical entities mentioned in the text, relations between biomedical entities, clinical features of patients, and progression of diseases, all of which can be used to predict, understand, and improve patient outcomes.

This open challenge is to build a biomedical knowledge graph based on the CORD-19 dataset and mine useful information from it. We recommend the following steps, but you are also encouraged to develop your own solution from scratch.

Extract Biomedical Entities from Text

Extract biomedical entities (such as fever, cough, headache, lung cancer, heart attack) from the text; a minimal extraction sketch follows the notes below. Note that:

  • Biomedical entities may consist of multiple words, for example heart attack, multiple myeloma, etc.
  • Biomedical entities may be written as synonyms, for example low blood pressure for hypotension.
  • Biomedical entities may be written in different forms, for example smoking, smokes, smoked.
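
A minimal sketch using SciSpaCy's small biomedical model (pip install scispacy plus the en_core_sci_sm model); its entity spans cover multi-word mentions such as "heart attack" directly, while synonym and inflection handling would still need an ontology lookup or lemmatisation on top.

import spacy

nlp = spacy.load("en_core_sci_sm")  # SciSpaCy biomedical pipeline
doc = nlp("The most common symptoms for COVID-19 are fever and cough.")
print([ent.text for ent in doc.ents])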

Extract Relations between Biomedical Entities

Extract relations between biomedical entities based on their appearance in the text. You may define a relation between biomedical entities by one or more of the following criteria:

  • The biomedical entities frequently co-occur together.
  • The biomedical entities have similar word representations.
  • The biomedical entities have clear relations based on the textual narrative. For example, from "The most common symptoms for COVID-19 are fever and cough" we know there are relations between "COVID-19", "fever" and "cough".

Build a Biomedical Knowledge Graph of COVID-19

Build a knowledge graph based on the results of the two steps above and visualise it.
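
A minimal sketch with networkx, assuming relations is a list of (entity_a, entity_b, weight) triples produced by the previous two steps.

import matplotlib.pyplot as plt
import networkx as nx

G = nx.Graph()
for a, b, w in relations:
    G.add_edge(a, b, weight=w)

nx.draw_networkx(G, pos=nx.spring_layout(G), node_size=300, font_size=8)
plt.savefig("covid_knowledge_graph.png")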

In [5]:
###################
# TODO: add your solution

###################
