iwwooIES101 Embedded Systems: Basic Software Techniques
(5) Kernel Architecture of (Real-Time) Embedded Operating Systems (Answer Explanations)
-----------------------------------------------------------------------------------------------------------------
5-01 ----------------------------------------------------------------------------------------------------------
Implementation concept question.
-----------------------------------------------------------------------------------------------------------------
5-02 ----------------------------------------------------------------------------------------------------------
Cramfs
The compressed ROM file system (or cramfs) is a free (GPL'ed) read-only Linux file system designed for simplicity and space-efficiency. It is mainly used in embedded and small-footprint systems.
Unlike a compressed image of a conventional file system, a cramfs image can be used as-is, i.e. without first decompressing it. For this reason, some Linux distributions use cramfs for initrd images (Debian 3.1 in particular) and installation images (SUSE Linux in particular), where there are constraints on memory and image size.
Design
Files on cramfs file systems are zlib-compressed one page at a time to allow random read access. The metadata is not compressed, but is expressed in a terse representation that is more space-efficient than conventional file systems.
The file system is intentionally read-only to simplify its design; random write access for compressed files is difficult to implement. cramfs ships with a utility (mkcramfs) to pack files into new cramfs images.
File sizes are limited to less than 16MB.
Maximum file system size is a little under 272MB. (The last file on the file system must begin before the 256MB block, but can extend past it.)
-----------------------------------------------------------------------------------------------------------------
5-03 ----------------------------------------------------------------------------------------------------------
Virtual machine
Dalvik virtual machine
-----------------------------------------------------------------------------------------------------------------
5-04 ----------------------------------------------------------------------------------------------------------
General knowledge question.
-----------------------------------------------------------------------------------------------------------------
5-05 ----------------------------------------------------------------------------------------------------------
- Unless the product has a hard real-time requirement, we normally do not allow nested interrupts (i.e., interrupts remaining enabled while an ISR runs, so the ISR may itself be interrupted and the CPU jumps to another ISR); instead, interrupts are disabled for the duration of the ISR, because nesting creates very complicated situations. This can be achieved by configuring the priority level at which the CPU accepts interrupts: for example, assign all ordinary interrupts priority 6, and assign priority 7 to interrupt events that must never wait or be lost. While an ordinary ISR runs, set the CPU to accept only interrupts of priority 7 and above; then no priority-6 interrupt can ever nest, while priority-7 interrupts are guaranteed to be handled at any time.
-----------------------------------------------------------------------------------------------------------------
5-06 ----------------------------------------------------------------------------------------------------------
BSS Segment: In architectures that use segmented memory management, the BSS segment (bss segment) is the region of memory that holds a program's uninitialized global variables. BSS stands for Block Started by Symbol. The BSS segment is statically allocated; the .bss section occupies no space in the on-disk executable, since only its size needs to be recorded.
Static variables that are not explicitly initialized are zeroed by the runtime before the program first uses them.
Code segment / text segment: In architectures that use segmented memory management, the code (text) segment is the region of memory that holds the program's executable instructions. Its size is fixed before the program runs, and the region is usually read-only, although some architectures allow a writable text segment, permitting self-modifying code. The text segment may also contain read-only constants such as string literals.
Stack Segment: The stack segment is a region of memory operated as a stack. In architectures that allocate program memory with segmented memory management, it holds local variables and function return addresses. The stack segment is allocated and used dynamically at run time and is accessed through the stack-top pointer; most current CPUs provide a dedicated register for the stack-top address.
-----------------------------------------------------------------------------------------------------------------
5-07 ----------------------------------------------------------------------------------------------------------
Implementation concept question.
-----------------------------------------------------------------------------------------------------------------
5-08 ----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------
5-09 ----------------------------------------------------------------------------------------------------------
Ref. 5-06
-----------------------------------------------------------------------------------------------------------------
5-10 ----------------------------------------------------------------------------------------------------------
(c) EDF consumes the most system resources.
-----------------------------------------------------------------------------------------------------------------
5-11 ----------------------------------------------------------------------------------------------------------
Unrelated option: the TLB.
Translation Lookaside Buffer (TLB): a CPU cache used by the memory management unit (MMU) to speed up virtual-to-physical address translation. All current desktop and server processors (such as x86) use a TLB. A TLB has a fixed number of slots containing page-table entries that map virtual addresses to physical addresses. It is typically implemented as content-addressable memory (CAM): the search key is the virtual address and the search result is the physical address. If the requested virtual address is present in the TLB, the CAM yields a match very quickly and the resulting physical address can be used to access memory. If the requested virtual address is not in the TLB (a miss), the translation falls back to the page table, which is much slower to access than the TLB; some systems allow page tables to be swapped to secondary storage, in which case the translation can take a very long time.
-----------------------------------------------------------------------------------------------------------------
5-12 ----------------------------------------------------------------------------------------------------------
Critical section
When a thread is inside a critical section, other threads or processes must wait (for example, under a bounded-waiting scheme); some synchronization mechanism must be implemented at the entry and exit points of the critical section to ensure that the shared resources are used mutually exclusively, e.g., a semaphore.
The simplest implementation is to prevent the processor from switching away while a thread is inside the critical section: on a uni-processor system this can be done by disabling interrupts (CLI), so that no context switch can occur while the critical section executes; on leaving the critical section, the processor is restored to its previous state.
Semaphore: a semaphore uses an integer variable to provide signaling, ensuring that in a parallel computing environment different processes do not conflict when accessing shared resources. It is a method that does not require busy waiting.
The semaphore concept was invented by the Dutch computer scientist Edsger W. Dijkstra and is widely used in operating systems. Each process is given a semaphore representing its current state; a process that has not obtained control is forced to stop at a designated point and wait for a signal telling it that it may proceed. If the semaphore can be an arbitrary integer, it is called a counting semaphore (or general semaphore); if it can only take the binary values 0 and 1, it is called a binary semaphore. In Linux, a binary semaphore is also referred to as a mutex.
Operations
A counting semaphore supports two operations, historically named V (also signal()) and P (also wait()). V increases the semaphore value S; P decreases it.
How it works:
- Initialization: give the semaphore a non-negative integer value.
- P (wait): decrement the value; if the value is zero, the caller blocks until it can complete the decrement.
- V (signal): increment the value; if any processes are blocked on the semaphore, wake one of them.
Spinlock
In software engineering, a spinlock is a lock where the thread simply waits in a loop ("spins") repeatedly checking until the lock becomes available. Since the thread remains active but isn't performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (that which holds the lock) blocks, or "goes to sleep".
Spinlocks are efficient if threads are only likely to be blocked for a short period, as they avoid overhead from operating system process re-scheduling or context switching. For this reason, spinlocks are often used inside operating system kernels. However, spinlocks become wasteful if held for longer durations, preventing other threads from running and requiring re-scheduling. The longer a lock is held by a thread, the greater the risk that it will be interrupted by the OS scheduler while holding the lock. If this happens, other threads will be left "spinning" (repeatedly trying to acquire the lock), while the thread holding the lock is not making progress towards releasing it. The result is an indefinite postponement until the thread holding the lock can finish and release it. This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (allocated time where a thread can run) spinning until the thread that holds the lock is finally finished.
Implementing spin locks correctly is difficult because one must take into account the possibility of simultaneous access to the lock to prevent race conditions. Generally this is only possible with special assembly language instructions, such as atomic test-and-set operations, and cannot be easily implemented in high-level programming languages or those languages which don't support truly atomic operations.[1] On architectures without such operations, or if high-level language implementation is required, a non-atomic locking algorithm may be used, e.g. Peterson's algorithm. But note that such an implementation may require more memory than a spinlock, be slower to allow progress after unlocking, and may not be implementable in a high-level language if out-of-order execution is allowed.
-----------------------------------------------------------------------------------------------------------------
5-13 ----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------
5-14 ----------------------------------------------------------------------------------------------------------
Context switch: a computing term for the process of storing and restoring the state (context) of a CPU so that multiple processes can share a single CPU. To switch the process running on the CPU, the state of the current process must first be saved, and then the state of the process to be executed is loaded back into the CPU.[1]
-----------------------------------------------------------------------------------------------------------------
5-15 ----------------------------------------------------------------------------------------------------------