Key: (M)onday (T)uesday (W)ednesday Thu(R)sday (F)riday (S)aturday S(U)nday
Also note: the log is in reverse chronological order, newest first.
Note: The above snippet is stolen from my advisor’s 2nd year PhD progress log 🤣
- Sprint 1: Detail the design for baselines.
- Sprint 1: Detail the design for Implementation 1 and Implementation 2.
- Study which models can take a set of premises as input.
- The answer is no. We have to customize.
- Presentation at group meeting.
- Problem formalization like FST.
- Study how other language games work. (Daily games / Philosophical games)
- Read the paper Xiachong recommended to me.
- Corpus study:
- I find multi-hop reasoning is NOT as desirable as I expected.
- I also find QGen in my existing work needs a lot of improvement.
- Think about abductive reasoning.
- Receive meta-review.
- The opinion/score is not too bad, but we still have doubts that haven’t been clarified. 😞
- Still thinking about Smarter.
- ⭐️ Tried ChatGPT on my task.
- Pretty impressive! Plenty of useful evidence can be extracted to facilitate its reasoning.
- Its weakness is logic (e.g. negation). It still cannot make a correct prediction after I update its “belief” about a fact (by negating it).
- Yuxi said NNs are still trained in a statistical manner, not a logical one. I said Honghua Zhang’s paper about the paradox explains it.
- Some ideas in our next work are endorsed. By trying and interacting …
- I think ChatGPT provides a new interface for interacting with the model, by updating its belief/opinion through natural language.
- Think about Smarter. Write down a few hypotheses to be tested.
- Read Finale Doshi-Velez’s papers as Level 2-3 reading. Min said she is going to teach CS6216 next semester.
- Read papers. EMNLP ‘22 paper recommended by Min. Flan-T5 paper recommended by Xiachong.
11.21 - 11.29:
In the past week:
- Quarantine @ Hotel.
- Quarantine @ Home.
- Drafted the rebuttal together with Min.
- Upload my GAP hours claim.
Flight, arrival, and start of quarantine.
Return the books to the library.
- PCR 🧪
- Thinking about language game.
<Take a rest>
- Help Saurabh edit his SOP.
- CS5228 TA slides for Assignment 2.
Prepare for the research meeting w/ Min.
Meet Min. Advice received: we have missed the Level 0 reading about emoji. I cannot decide whether it is a dataset/model/linguistic-theory paper before I have that knowledge. It is better to do a depth-first search than a breadth-first search in a conference paper; reviewers need to decide which type of paper it is before accepting it.
Besides, Min also thinks information gain is a good idea.
- Corpus study for ELCo.
- Busy figuring out my PCR tests 🧪.
<ELCo> and <Smarter>
- Still trying to formalize ELCo.
- Talk about the <Smarter> project with Hongfu.
- Meet Zi Yun, get her dataset.
- I think harder about the problem formalization. Read Vered’s and Chris’s papers and find they both perform classification.
- I realized in our existing setting, the assumptions for ranking and classification are equivalent.
- By a nice coincidence, I introduced Zixu to our group in the social session, as he wanted to learn sequential modeling from us.
- <Didn’t do much on my own research>, need to push harder tomorrow.
- Continue reading Liangming’s references about information gain and try to make sense of them.
- Sit in on Xiachong and Taha’s meeting to learn what they are doing (and the state of the art in their field).
- Attend Yuxi’s meeting w/ Min.
- Talked w/ Liangming. He is interested in my next work. He offers two points: (1) in his view, my ongoing topic is very promising; (2) a max info gain perspective … leads to the next point 👇
- Read the papers from Goodman’s group and William’s group but haven’t really understood.
- Read papers (discourse / machine learning / Min’s 1998 verb paper)
- 2-hour research discussion with Jielin. She consulted me about training a vision model for designers. I drew an analogy with the different levels in NLP and advised her to try to stratify her problem.
- Basketball 500kCal.
- Finish discussion period for AAAI-23 review.
- Update my webpage and initialize 2022 log starting today (many of them are offline, too lazy to move online).