Llama LLM MCQ Questions and Answers
Q1. Who developed the Llama series of large language models?
A. OpenAI
B. Meta (Facebook)
C. Google
D. Anthropic
Q2. What licensing approach did Meta take with Llama 2?
A. Fully proprietary with no public access
B. Open source under GPL license
C. Available for research only
D. Freely available for commercial use with certain restrictions
Q3. What technique was used to make Llama 2-Chat versions safer for deployment?
A. Supervised Fine-Tuning (SFT) only
B. Reinforcement Learning from Human Feedback (RLHF)
C. Constitutional AI
D. Retrieval-Augmented Generation
Q4. What was a key architectural advancement in Llama 2 compared to the original Llama?
A. Significantly increased training data volume
B. Replacement of transformer architecture with a new design
C. Integration of multimodal capabilities
D. Doubled context length
Q5. Which size variants were released for Llama 2?
A. 7B, 13B, 34B, and 70B parameters
B. 7B, 13B, and 70B parameters
C. 7B, 13B, 33B, and 65B parameters
D. 3B, 7B, 13B, and 70B parameters
Q6. What is "Llama-in-the-middle" evaluation?
A. Running Llama on edge devices
B. Testing Llama's resistance to prompt injection attacks
C. Using Llama to evaluate the outputs of other AI models
D. Training Llama on data extracted from the middle of documents
Q7. What tokenizer does Llama use?
A. BPE (Byte Pair Encoding)
B. WordPiece
C. Unigram
D. SentencePiece
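For readers unfamiliar with the terminology in Q7, the core idea of Byte Pair Encoding can be shown in a few lines: starting from individual characters, the most frequent adjacent pair of symbols is repeatedly merged into a new vocabulary token. The sketch below is a minimal illustration of that merge step, not Llama's actual tokenizer implementation.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Start from individual characters and apply a few merge rounds.
tokens = list("low lower lowest".replace(" ", "_"))
for _ in range(3):
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
```

In a real BPE tokenizer the learned merge rules are saved in order and replayed at inference time; Llama ships its merges inside a SentencePiece model file.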
Q8. What is a key advantage of Llama models compared to some other similar-sized LLMs?
A. They can run efficiently on consumer hardware
B. They require no fine-tuning for most tasks
C. They have built-in image understanding capabilities
D. They can generate code in all programming languages
Q9. What notable improvement did Llama 2 make in terms of safety compared to Llama 1?
A. Addition of a content filtering API
B. Pre-filtering of the training data
C. Safety-specific RLHF training
D. Implementation of an external moderator model
Q10. Which of these is NOT a version of the Llama model family?
A. Llama 2-Chat
B. Llama Code
C. Llama Vision
D. Llama 3
Q11. What framework is commonly used to quantize Llama models for more efficient deployment?
A. TensorFlow Lite
B. ONNX Runtime
C. NVIDIA TensorRT
D. GGML/GGUF
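To make the quantization idea in Q11 concrete: quantizing replaces float weights with small integers plus a scale factor, shrinking memory use at a small cost in precision. The sketch below shows simple symmetric int8 quantization; note that GGML/GGUF formats in practice use block-wise schemes (e.g. groups of 32 weights, each with its own scale), so this is an illustration of the principle only.

```python
def quantize_int8(weights):
    """Symmetric quantization: map floats to [-127, 127] plus one float scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return [v * scale for v in q]

weights = [0.8, -1.27, 0.05, 0.0, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each original 32-bit float is stored as one 8-bit integer, roughly a 4x memory reduction; lower-bit schemes (4-bit and below) push this further, which is what makes 7B-class models practical on consumer hardware.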