Notes:
Week 1
- contrastive loss
- LR (learning rate)
- 클라이
- size = 512
- 2-3 channels (like RGB)
- limitations in clipping
- Baseline?
- upstream
- downstream: AI Hub
- A foundation model for generalizable disease detection from retinal images (self-supervised learning)
Week 2
- Stanford CS231
- A foundation model for generalizable disease detection from retinal images (self-supervised learning)
Week 3
RAD-DINO
An image encoder continually pre-trained with medical scans by adopting the DINOv2 image-only self-supervised learning (SSL) approach without relying on text supervision
- Masked Image Modeling (MIM)
- Self-supervised Instance Discrimination
Hybrid design enables the transferability of learned features to both global and local downstream tasks without requiring external text supervision
DINOv2
- a state-of-the-art image-only self-supervised learning method, optimized for pre-training vision transformers (ViTs)
- the student is separately fed multiple crops (multi-crop) of an image, and must align its local feature representations with those predicted by the teacher network for the global views of the image
- the teacher network is updated from the student's parameters using an exponential moving average (EMA), with gradient back-propagation limited to the student network
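The EMA teacher update described above can be sketched in a few lines. This is an illustrative numpy toy, not the actual DINOv2 implementation (which applies the same rule to ViT parameters inside the training loop):

```python
import numpy as np

# Sketch of a DINO-style EMA teacher update: each teacher tensor is a momentum
# average of the corresponding student tensor, t <- m*t + (1-m)*s.
# Gradients only ever flow through the student; the teacher is never trained.
def ema_update(teacher_params, student_params, momentum=0.996):
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

# Toy parameter lists standing in for student/teacher network weights.
student = [np.ones((4, 4)), np.zeros(3)]
teacher = [np.zeros((4, 4)), np.ones(3)]
teacher = ema_update(teacher, student, momentum=0.9)
# After one step, each teacher tensor moves 10% of the way toward the student.
```

With a momentum close to 1 (e.g. 0.996), the teacher changes slowly and acts as a stable target for the student's multi-crop predictions.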
Fine-Tuning
- Takes a model that has already been trained on a given task and tunes it to perform a similar second task
- Freeze all layers aside from the one layer you are modifying, to see the effect of that modified layer
- freeze the weights and biases
- CONS:
- Time-consuming and expensive to train as LLMs get larger
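The layer-freezing idea can be illustrated without any framework. In PyTorch you would set `requires_grad = False` on the frozen parameters; in this minimal sketch (names are illustrative), "frozen" simply means a layer is skipped by the update step:

```python
import numpy as np

# Two toy layers: a pre-trained backbone we freeze, and a new task head we train.
layers = {
    "backbone": {"W": np.ones((4, 4)), "frozen": True},
    "head":     {"W": np.zeros((4, 2)), "frozen": False},
}

def sgd_step(layers, grads, lr=0.1):
    for name, layer in layers.items():
        if layer["frozen"]:
            continue  # frozen weights (and biases) are left untouched
        layer["W"] -= lr * grads[name]

grads = {"backbone": np.ones((4, 4)), "head": np.ones((4, 2))}
sgd_step(layers, grads)
# backbone["W"] is unchanged; only head["W"] was updated.
```

Only the unfrozen layer's parameters change, so any difference in downstream performance can be attributed to that layer.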
LoRA
- a parameter-efficient fine-tuning technique
- for a large weight matrix W in the original model, LoRA approximates its update ΔW by decomposing it into two much smaller matrices, A and B (ΔW = B @ A). During fine-tuning, you only train A and B, keeping W fixed. This dramatically reduces the number of trainable parameters
- smaller checkpoints
- reduces the number of trainable parameters per task
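A minimal numpy sketch of the low-rank decomposition (shapes and names are illustrative): with d = k = 512 and rank r = 8, training A and B touches 8,192 parameters instead of the 262,144 in W, roughly 32x fewer.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 512, 512, 8                    # W is d x k; LoRA rank r << min(d, k)
W = rng.standard_normal((d, k))          # pre-trained weight, kept frozen
A = rng.standard_normal((r, k)) * 0.01   # trainable, r x k
B = np.zeros((d, r))                     # trainable, d x r (zero-initialized)

def adapted_forward(x):
    # Effective weight is W + ΔW with ΔW = B @ A; W itself is never updated.
    return x @ (W + B @ A).T

full_params = W.size           # what full fine-tuning would train
lora_params = A.size + B.size  # what LoRA trains instead
```

Initializing B to zeros makes ΔW start at zero, so the adapted model initially matches the pre-trained one (this mirrors LoRA's standard initialization); the checkpoint for a task only needs to store A and B, which is why per-task checkpoints are so small.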