ACEEE Int. J. on Information Technology, Vol. 01, No. 03, Dec 2011

Improved Parallel Algorithm for Time Series Based Forecasting Using OTIS-Mesh

Ashish Gupta
Department of Computer Science and Engineering, Indian School of Mines, Dhanbad, Jharkhand, 826004, India
Email: ashish.parj@gmail.com

Abstract- Forecasting plays an important role in business, technology and many other areas: it helps organizations to increase profits, reduce lost sales and plan production more efficiently. A parallel algorithm for forecasting on the OTIS-Mesh was reported recently [9]; it requires $5(\sqrt{n}-1)$ electronic steps and 4 optical steps. In this paper we present an improved parallel algorithm for short term time series forecasting using the OTIS-Mesh. It requires $5(\sqrt{n}-1)$ electronic steps and only 1 optical step, using the same number of I/O ports as considered in [9], and is shown to be an improvement over the parallel algorithm for time series forecasting using the OTIS-Mesh of [9].

Index Terms- Time Series Forecasting, Parallel Algorithm, OTIS-Mesh.

I. INTRODUCTION

Many businesses and organizations benefit from forecasting through increased profits, reduced lost sales and more efficient production planning. Forecasting also plays an important role in many areas such as weather forecasting, flood forecasting, etc. Many researchers have implemented forecasting in parallel on different interconnection networks, and forecasting can also be mapped onto an OTIS network. The Optical Transpose Interconnection System (OTIS) [1],[2] is a hybrid architecture which benefits from both optical and electronic connections: optical connections are used between processors whose distance exceeds a few millimetres (in different packages), and electronic connections are used between close processors (within the same physical package). Several models exploit this combination of optical and electronic connections. In an OTIS-Mesh, $n^2$ processors are divided into $n$ groups, and the processors in each group are laid out as a $\sqrt{n} \times \sqrt{n}$ 2D mesh. According to the OTIS rule, the $P$-th processor of group $G$ is connected to the $G$-th processor of group $P$. The pattern of an OTIS network can be varied according to the interconnection among the processors within a group. The topology of the OTIS-Mesh is shown in Figure 1.

[Figure 1. OTIS-Mesh network]

The rest of the paper is organized as follows. Section IV describes the methodology for time series forecasting using the weighted moving average and its special cases. In Section V we present our proposed parallel algorithm. We also discuss the scalability issue for forecasting.
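To make the OTIS rule described above concrete, the following is a minimal sketch (ours, not from the paper): otis_partner models the optical transpose connection between groups, and mesh_neighbours models the electronic links inside one $\sqrt{n} \times \sqrt{n}$ group. The function names and the row-major processor numbering are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper) of an OTIS-Mesh with
# n groups of n processors each, n a perfect square. Processor p of group g
# is connected optically to processor g of group p; within a group,
# processors form a sqrt(n) x sqrt(n) mesh with electronic links.

import math

def otis_partner(g, p):
    """Optical (transpose) neighbour of processor p in group g."""
    return (p, g)

def mesh_neighbours(p, n):
    """Electronic neighbours of processor p inside one sqrt(n) x sqrt(n) group."""
    s = int(math.isqrt(n))
    r, c = divmod(p, s)                 # row-major position inside the group
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(r + dr) * s + (c + dc)
            for dr, dc in steps
            if 0 <= r + dr < s and 0 <= c + dc < s]

if __name__ == "__main__":
    n = 16                              # 16 groups of 16 processors: 256 in total
    assert otis_partner(*otis_partner(3, 7)) == (3, 7)  # transpose is an involution
    print(mesh_neighbours(5, n))        # neighbours of processor 5 in a 4 x 4 group
```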
II. RELATED WORKS

Many researchers have developed parallel algorithms for short term time series forecasting. Jana and Sinha presented two parallel algorithms for forecasting, implemented on a linear array and a complete binary tree model [14], which require $m+1$ steps and (...) steps respectively; in the extended case [14] they require (...) and (...) steps respectively on the ST array and the ST tree. Both algorithms are based on the weighted moving average technique. Sudhanshu and Jana presented a parallel algorithm for forecasting based on the OTIS-Mesh network [9]; this algorithm requires $5(\sqrt{n}-1)$ electronic moves and 4 optical moves [9]. In this paper we present an improved parallel algorithm for forecasting based on the OTIS-Mesh network. It is based on the weighted moving average technique, requires $5(\sqrt{n}-1)$ electronic moves and 1 optical move, and compares favourably with the parallel algorithm for forecasting considered in [9].

III. FORECASTING MODELS

Forecasting models can be divided into qualitative and quantitative models; we discuss here the time series models of the quantitative class. Among the different quantitative forecasting models available for the successful implementation of decision making systems, time series models are very popular. In these models, given a set of past observations, say $d_1, d_2, \ldots, d_m$, the problem is to estimate $d(m+t)$ through extrapolation, where $t$ (called the lead time) is a small positive integer, usually set to 1. The observed data values usually show different patterns, such as a constant process, a cyclical pattern and a linear trend, as shown in Figure 2. Several models are available for time series forecasting; however, a particular model may be effective only for a specific pattern of the data, e.g. the simple moving average is very suitable when the data exhibit a constant process. The weighted moving average is a well known time series model for short term forecasting which is suitable when the data exhibit a cyclical pattern around a constant trend. The exponential weighted moving average is a more widely accepted technique for short term forecasting than the (simple) weighted moving average. However, our motivation to parallelize the weighted moving average comes from the fact that both the exponential weighted moving average and the simple moving average (MA) are special cases of the weighted moving average, as will be discussed in Section IV. Moreover, in order to find the optimum value of the window size, the method involves O(m) iterations, where each iteration requires O(n) time for calculating the $(m-n+1)$ weighted moving averages for window size $n$ and data size $m$. In this paper, we present an improved parallel algorithm for short term forecasting which is based on the weighted moving average time series model and is mapped on the OTIS-Mesh. The algorithm is shown to be an improvement over the algorithm considered in [9]: it requires $5(\sqrt{n}-1)$ electronic moves + 1 OTIS move using the same number of I/O ports, for a data set of size $m$ and window size $n$, using $n^2$ processors.

[Figure 2(a). Constant data pattern]
[Figure 2(b). Trend data pattern]
[Figure 2(c). Cyclic data pattern]

IV. METHODOLOGY

We describe here the methodology for forecasting using the weighted moving average, as given in [14]. In this method, for a set of $n$ data values $d_t, d_{t-1}, \ldots, d_{t-n+1}$ and a set of positive weights $w_1, w_2, \ldots, w_n$, we calculate their weighted moving average at time $t$ by the following formula:

$$WM(t) = \frac{\sum_{i=1}^{n} w_i\, d_{t-n+i}}{\sum_{i=1}^{n} w_i} \qquad (1)$$

where $w_n \ge w_{n-1} \ge \cdots \ge w_1 \ge 0$. We then use $WM(t)$ to estimate the forecast value $\hat{d}(t+\tau)$ at time $t+\tau$, i.e., $\hat{d}(t+\tau) = WM(t)$. The quality of the forecast depends on the selection of the window size $n$. Therefore, in order to find the optimum value of $n$, we calculate the $m-n+1$ weighted averages for a specific value of $n$ by sliding the window over the data values, and the corresponding mean square error (MSE) of the resulting forecasts against the observed data is calculated using

$$MSE = \frac{1}{m-n}\sum_{t=n}^{m-1} \big(d_{t+1} - WM(t)\big)^2 .$$

We then vary the window size $n$ to obtain the corresponding MSE with the newly calculated weighted moving averages. The same process is repeated for $n = 1, 2, 3, \ldots, m$. The value of $n$ for which the MSE is least is chosen for forecasting.

Special Cases

A. Simple Moving Average:
In this method, equal importance is given to each data value. Hence we assign the equal weight $1/n$ to all the data values and obtain the moving average

$$MA(t) = \frac{1}{n}\sum_{i=1}^{n} d_{t-n+i} .$$

As discussed in Section III, this method is best when the data pattern shows a constant process.

B. Exponential Moving Average:
In this method, the more recent observations are given larger weights so as to keep the error small, and thus the weights decrease as we go back in time. The exponential moving average follows equation (1) with weight $w_i = \alpha(1-\alpha)^{n-i}$, where $0 < \alpha < 1$. This method is suitable for a cyclical pattern around a constant trend and is widely accepted, especially in business environments. However, the method suffers from the difficulty of properly selecting the value of the parameter $\alpha$, and there is no easy way to do it.

V. ALGORITHM

Assume $\tau = 1$. Then the $(m-n+1)$ weighted moving averages $WM(n), WM(n+1), \ldots, WM(m)$ are obtained from equation (1) for a given window size $n$, along with their error terms $e(t) = d_{t+1} - WM(t)$.
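As a concrete companion to the methodology above, here is a minimal sequential sketch (ours; the function names and the increasing example weights $w_i = i/n$ are illustrative assumptions). It computes the weighted moving averages of equation (1), scores each window size by the MSE of its one-step-ahead forecasts, and keeps the window size with the least MSE.

```python
# Sequential sketch of the Section IV methodology: weighted moving averages
# for window size n over m data values, the MSE of the one-step-ahead
# forecasts (tau = 1), and selection of the window size with the least MSE.

def weighted_moving_averages(data, weights):
    """WM(t) = sum(w_i * d_{t-n+i}) / sum(w_i) for each full window."""
    n, total = len(weights), sum(weights)
    return [sum(w * d for w, d in zip(weights, data[t - n:t])) / total
            for t in range(n, len(data) + 1)]

def mse_one_step(data, weights):
    """Mean square error of forecasts d^(t+1) = WM(t) against observed data."""
    wm = weighted_moving_averages(data, weights)
    n = len(weights)
    errors = [(data[t] - wm[t - n]) ** 2 for t in range(n, len(data))]
    return sum(errors) / len(errors)

def best_window(data, max_n=None):
    """Try n = 1, 2, ... and keep the window size with the least MSE."""
    max_n = max_n or len(data) - 1
    scored = []
    for n in range(1, max_n + 1):
        weights = [i / n for i in range(1, n + 1)]  # example: w_1 <= ... <= w_n
        scored.append((mse_one_step(data, weights), n))
    return min(scored)[1]

data = [12, 14, 13, 15, 14, 16, 15, 17, 16, 18]
print(best_window(data))
```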
For a different value of $n$, say $n_i$, we need to compute a different set of $(m-n_i+1)$ weighted moving averages (as given above), for a maximum of $m$ iterations. However, our target is to parallelize the above computation for a single iteration so that the overall time complexity is significantly reduced. The basic idea is as follows. We initially feed the data values and the weight vector through the boundary processors. Then, using suitable electronic and OTIS moves, they are stored in the D and W registers respectively. Next we calculate their products in each processor in parallel. The products are then used to form a local sum in each group, and these local sums are finally accumulated using suitable electronic and OTIS moves to produce the weighted moving averages.

VI. PARALLEL ALGORITHM

Step 1. /* Data input */
1.1 Feed the data values $d_i$ to the boundary processors in the 1st column position of each group $G_{xy}$, as shown in Figure 3.
1.2 Feed the weights $w_j$ to the first processor $P_{11}$ of each group $G_{xy}$, as shown in Figure 3.

[Figure 3. Feeding of data and weights]

Step 2. /* Data distribution into D-registers */
Shift the data values row-wise to store them in the D-registers in a pipelined fashion.

Step 3. /* Broadcast of weights */
Perform a column-wise broadcast of the weights fed in step 1.2, in parallel, and store them in the W registers.

Step 4. /* Broadcast of weights */
Perform a row-wise broadcast of the weights stored in step 3, in each group in parallel, and store them in the W registers.
Illustration 1: The content of the W registers after step 4 is shown in Figure 5.

[Figure 5. Content of W registers after step 4]

Step 5. All processors do in parallel:
Perform an OTIS move on the content of the W registers in each group.
Illustration 2: The content of the W registers after step 5 is shown in Figure 6.

[Figure 6. Content of W registers after step 5]

Step 6. All processors do in parallel:
Form the products of the contents of the D registers and W registers and store them in the C-registers.

Step 7. /* Column-wise summation */
All groups do steps 7.1 and 7.2 in parallel:
7.1 Sum the contents of the C registers column-wise and store the sums in the first-row processors of each group.
7.2 Sum the contents of the W registers column-wise and store the sums in the first-row processors of each group.

Step 8. /* Row-wise summation */
All groups do steps 8.1 and 8.2 in parallel:
8.1 Sum the contents of the C registers in the first row of each group and store the result in the C register of the first processor $P_{11}$ of each group.
8.2 Sum the contents of the W registers in the first row of each group and store the result in the first processor $P_{11}$ of each group.
Illustration 3: We store the data, weights and products in the $D_{ij}$, $W_{ij}$ and $C_{ij}$ registers respectively, where $i$ indicates the register number within a group and $j$ indicates the group number; group numbers are organized in row-major order. The C registers and W registers of processor $P_{11}$ after step 8 are shown in Figure 7.

Step 9. Divide the content of the C register by the content of the W register and store the quotient in the R register of the first processor of each group $G_{xy}$.

Remark 1: The final results emerge from the R-registers of the first processor $P_{11}$ in each group $(G_{xy}, P_{11})$. The result after step 8 is shown in Table I.
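As an illustration of the data movement, here is a sketch (ours, using an arbitrary stand-in layout rather than the exact feeding pattern of Figure 3) that simulates the primitives the algorithm relies on with NumPy: a register spread over the whole OTIS-Mesh is a 4-D array indexed by group and processor coordinates, in-group broadcasts and reductions stand in for the $(\sqrt{n}-1)$-step electronic moves, and the OTIS move of step 5 is an exchange of group and processor indices.

```python
# Sequential simulation (illustrative) of the register moves in steps 1-9.
# A register across the whole OTIS-Mesh is a 4-D array R[gx, gy, px, py]
# (group coordinates, processor coordinates within the group).

import numpy as np

s = 3                                    # sqrt(n): each group is an s x s mesh
W = np.zeros((s, s, s, s))               # W registers
W[:, :, 0, 0] = np.random.rand(s, s)     # step 1.2: a weight fed to P11 of each group

# Steps 3-4: column-wise then row-wise broadcast inside each group; each
# broadcast costs (sqrt(n) - 1) electronic moves on the real machine.
W = np.broadcast_to(W[:, :, :1, :1], W.shape).copy()

# Step 5: one OTIS move -- exchange group and processor coordinates.
W = W.transpose(2, 3, 0, 1)

D = np.random.rand(s, s, s, s)           # D registers after step 2 (layout elided)
C = D * W                                # step 6: local products, constant time

# Steps 7-8: column-wise then row-wise summation inside each group; on the
# real machine the sums land in processor P11 of every group.
C_sum = C.sum(axis=(2, 3))
W_sum = W.sum(axis=(2, 3))

R = C_sum / W_sum                        # step 9: one division per group
print(R.shape)                           # one candidate weighted average per group
```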
RESULTS

We describe here the time complexity of mapping the parallel algorithm on the OTIS-Mesh with $n^2$ processors.

Time Complexity: Steps 2, 3, 4, 7 and 8 require $(\sqrt{n}-1)$ steps each. Step 5 requires 1 OTIS move. The rest of the steps require constant time. Therefore the parallel algorithm requires $5(\sqrt{n}-1)$ electronic moves and 1 OTIS move.

[TABLE I. Content of C registers and W registers of processors $P_{11}$ of each group after step 8]

Scalability: Now we consider a window of arbitrary size mapped onto the OTIS-Mesh; in other words, we consider the case when the window size is independent of the number of processors. For the sake of simplicity and without any loss of generality, let us assume the window size to be $kn$. Note that in this case the size of the data set will be $2kn-1$. Accordingly, the data set is partitioned into $k$ subsets $\{d_1, d_2, \ldots, d_n\}, \{d_2, d_3, \ldots, d_{n+1}\}, \ldots, \{d_{2kn-n}, d_{2kn-n+1}, \ldots, d_{2kn-1}\}$. Given a subset of the data and its corresponding weights, we can likewise partition the weight set into $k$ subsets $\{w_1, w_2, \ldots, w_n\}, \{w_{n+1}, w_{n+2}, \ldots, w_{2n}\}, \ldots, \{w_{(k-1)n+1}, w_{(k-1)n+2}, \ldots, w_{kn}\}$. Each weight subset, together with its data subset, is fed to the OTIS-Mesh. We then run the above algorithm (Parallel Algorithm) and store the result temporarily. Next we input another data subset along with the corresponding weight subset, execute Parallel Algorithm and update the current result with the previously calculated partial result. This process is repeated $k$ times to yield the final result. It is obvious that this version of the algorithm requires $5k(\sqrt{n}-1)$ electronic moves + $k$ OTIS moves, which is $k$ times the time complexity of Parallel Algorithm.
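A hedged sketch (ours) of this scalability scheme: the window of size $kn$ is processed in $k$ passes of a mesh-sized kernel, and the partial numerators and denominators of equation (1) are accumulated before the final division. run_parallel_kernel is a plain sequential stand-in for one execution of Parallel Algorithm.

```python
# Scalability sketch: a window of size k*n is split into k subsets of n
# weights (with matching data subsets); the mesh-sized kernel runs k times
# and partial sums are accumulated, for 5k(sqrt(n)-1) electronic moves and
# k OTIS moves overall on the real machine.

def run_parallel_kernel(data_subset, weight_subset):
    """Return the partial numerator and denominator of one WM window."""
    num = sum(w * d for w, d in zip(weight_subset, data_subset))
    den = sum(weight_subset)
    return num, den

def scalable_wm(data_window, weights, k, n):
    """Weighted moving average for window size k*n, computed in k passes."""
    num_total, den_total = 0.0, 0.0
    for j in range(k):
        lo, hi = j * n, (j + 1) * n
        p_num, p_den = run_parallel_kernel(data_window[lo:hi], weights[lo:hi])
        num_total += p_num               # update with the previous partial result
        den_total += p_den
    return num_total / den_total

weights = list(range(1, 7))              # k*n = 6 weights with k = 2, n = 3
window = [5.0, 6.0, 5.5, 6.5, 6.0, 7.0]
print(scalable_wm(window, weights, k=2, n=3))
```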
CONCLUSIONS

In this paper, we have presented an improved parallel algorithm for short term forecasting using the weighted moving average technique. The algorithm is mapped on an $n^2$-processor OTIS-Mesh. We have shown that it requires $5(\sqrt{n}-1)$ electronic moves + 1 OTIS move. This parallel algorithm is shown to be an improvement over the parallel algorithm of [9] using the same number of I/O ports. The algorithm is also shown to be scalable.

COMPARATIVE RESULT ANALYSIS

In Table II we compare the time complexity, in terms of electronic moves and optical moves, required by the parallel algorithms mapped on the OTIS-Mesh network. Our proposed algorithm requires the same time complexity in terms of electronic moves as [9], but only 1 optical move.

TABLE II. COMPARISON OF OTIS-MESH BASED PARALLEL ALGORITHMS

Algorithm                Electronic moves     Optical moves
Jha and Jana [9]         $5(\sqrt{n}-1)$      4
Proposed algorithm       $5(\sqrt{n}-1)$      1

FUTURE WORKS

For the implementation of forecasting on a parallel architecture, the shifting of data must be done properly, so we should explore parallel architectures that support such shifting. We can also exploit the properties of other OTIS-based networks; therefore we should try to map time series forecasting onto other OTIS-based networks such as the OTIS-Mesh of Trees, the OTIS-Hypercube, etc.

REFERENCES

[1] G. C. Marsden, P. J. Marchand, P. Harvey and S. C. Esener, "Optical transpose interconnection system architectures," Optics Letters, Vol. 18, No. 13, pp. 1083-1085, July 1993.
[2] C. F. Wang and S. Sahni, "OTIS optoelectronic computers," in Parallel Computation Using Optical Interconnections, K. Li, Y. Pan and S. Q. Zheng, Eds., Kluwer Academic, 1998.
[3] C. F. Wang and S. Sahni, "Image processing on the OTIS-Mesh optoelectronic computer," IEEE Trans. on Parallel and Distributed Systems, 11, 97-109, 2000.
[4] C. F. Wang and S. Sahni, "Matrix multiplication on the OTIS-Mesh optoelectronic computer," IEEE Transactions on Computers, 50(7), 635-646, July 2001.
[5] C. F. Wang and S. Sahni, "Basic operations on the OTIS-Mesh optoelectronic computer," IEEE Trans. on Parallel and Distributed Systems, 9(12), 1226-1236, Dec. 1998.
[6] C. F. Wang and S. Sahni, "BPC permutations on the OTIS-Hypercube optoelectronic computer," Informatica, 22(3), 1998.
[7] P. K. Jana and B. P. Sinha, "An improved parallel prefix algorithm on OTIS-Mesh," Parallel Processing Letters, 16, 429-440, 2006.
[8] P. K. Jana, "Polynomial interpolation and polynomial root finding on OTIS-Mesh," Parallel Computing, 32(4), 301-312, 2006.
[9] Sudhanshu Kumar Jha and Prasanta K. Jana, "Parallel algorithm for time series based forecasting using OTIS-Mesh," International Journal of Computer Applications (0975-8887), Vol. 1, No. 26, 2010.
[10] K. T. Lucas and P. K. Jana, "An efficient parallel sorting algorithm on OTIS mesh of trees," Proc. IEEE Intl. Advance Computing Conference, Patiala, India, 6-7 March 2009, 175-180.
[11] K. T. Lucas, D. K. Mallick and P. K. Jana, "Parallel algorithm for conflict graph on OTIS triangular array," Lecture Notes in Computer Science, 4904, 274-279, 2008.
[12] S. Rajasekaran and S. Sahni, "Randomized routing, selection, and sorting on the OTIS-Mesh," IEEE Transactions on Parallel and Distributed Systems, 9, 833-840, 1998.
[13] S. C. Wheelwright and S. Makridakis, Forecasting Methods for Management, John Wiley and Sons, 1980.
[14] P. K. Jana and B. P. Sinha, "Fast parallel algorithms for forecasting," Computers Math. Applic., 34(9), 39-49, 1997.
[15] D. Nassimi and S. Sahni, "Bitonic sort on a mesh-connected parallel computer," IEEE Trans. Comput., C-28(1), 1979.