Optimal control
[Figure: Optimal control problem benchmark (Luus) with an integral objective, inequality, and differential constraint]

Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized (Ross, A Primer on Pontryagin's Principle in Optimal Control, Collegiate Publishers, 2015). It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure (Luenberger, Introduction to Dynamic Systems, John Wiley & Sons, 1979). Or the dynamical system could be a nation's economy, with the objective to minimize unemployment; the controls in this case could be fiscal and monetary policy (Kamien, Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management, Dover, 2013). A dynamical system may also be introduced to embed operations research problems within the framework of optimal control theory (Ross, Proulx and Karpenko, "An Optimal Control Theory for the Traveling Salesman Problem and Its Variants", arXiv:2005.03186, 2020; Ross, Karpenko and Proulx, "A Nonsmooth Calculus for Solving Some Graph-Theoretic Control Problems", IFAC-PapersOnLine 49(18), 2016, 462–467).

Optimal control is an extension of the calculus of variations, and is a mathematical optimization method for deriving control policies (Sargent, "Optimal Control", Journal of Computational and Applied Mathematics 124, 2000, 361–371). The method is largely due to the work of Lev Pontryagin and Richard Bellman in the 1950s, after contributions to the calculus of variations by Edward J. McShane (Bryson, "Optimal Control—1950 to 1985", IEEE Control Systems Magazine 16(3), 1996, 26–33). Optimal control can be seen as a control strategy in control theory.

General method

Optimal control deals with the problem of finding a control law for a given system such that a certain optimality criterion is achieved. A control problem includes a cost functional that is a function of state and control variables. An optimal control is a set of differential equations describing the paths of the control variables that minimize the cost functional. The optimal control can be derived using Pontryagin's maximum principle (a necessary condition, also known as Pontryagin's minimum principle or simply Pontryagin's principle) (Ross, A Primer on Pontryagin's Principle in Optimal Control, Collegiate Publishers, 2009), or by solving the Hamilton–Jacobi–Bellman equation (a sufficient condition).

We begin with a simple example. Consider a car traveling in a straight line on a hilly road. The question is, how should the driver press the accelerator pedal in order to minimize the total traveling time? In this example, the term control law refers specifically to the way in which the driver presses the accelerator and shifts the gears. The system consists of both the car and the road, and the optimality criterion is the minimization of the total traveling time. Control problems usually include ancillary constraints: for example, the amount of available fuel might be limited, the accelerator pedal cannot be pushed through the floor of the car, there may be speed limits, and so on. A proper cost function will be a mathematical expression giving the traveling time as a function of the speed, geometrical considerations, and initial conditions of the system. Constraints are often interchangeable with the cost function.

Another related optimal control problem may be to find the way to drive the car so as to minimize its fuel consumption, given that it must complete a given course in a time not exceeding some amount.
Yet another related control problem may be to minimize the total monetary cost of completing the trip, given assumed monetary prices for time and fuel.

A more abstract framework goes as follows. Minimize the continuous-time cost functional

J[\mathbf{x}(\cdot), \mathbf{u}(\cdot), t_0, t_f] := E[\mathbf{x}(t_0), t_0, \mathbf{x}(t_f), t_f] + \int_{t_0}^{t_f} F[\mathbf{x}(t), \mathbf{u}(t), t] \, \mathrm{d}t

subject to the first-order dynamic constraints (the state equation)

\dot{\mathbf{x}}(t) = \mathbf{f}[\mathbf{x}(t), \mathbf{u}(t), t],

the algebraic path constraints

\mathbf{h}[\mathbf{x}(t), \mathbf{u}(t), t] \leq \mathbf{0},

and the endpoint conditions

\mathbf{e}[\mathbf{x}(t_0), t_0, \mathbf{x}(t_f), t_f] = 0,

where \mathbf{x}(t) is the state, \mathbf{u}(t) is the control, t is the independent variable (generally speaking, time), t_0 is the initial time, and t_f is the terminal time. The terms E and F are called the endpoint cost and the running cost, respectively. In the calculus of variations, E and F are referred to as the Mayer term and the Lagrangian, respectively. Furthermore, the path constraints are in general inequality constraints and thus may not be active (i.e., equal to zero) at the optimal solution. The optimal control problem as stated above may also have multiple solutions (i.e., the solution may not be unique). Thus, it is most often the case that any solution [\mathbf{x}^*(t), \mathbf{u}^*(t), t_0^*, t_f^*] to the optimal control problem is only locally minimizing.
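As a concrete illustration of this abstract formulation, the sketch below numerically evaluates the cost functional J for a candidate control by Euler integration. The particular choices of f, F, and E (scalar dynamics x' = u, running cost u², endpoint cost (x(t_f) − 1)²) are illustrative assumptions, not part of the general statement:

```python
# Minimal sketch: evaluate J = E[x(t0), t0, x(tf), tf] + integral of F dt
# for a given control u(t), using forward Euler on x' = f(x, u, t).
# The dynamics f, running cost F, and endpoint cost E here are toy choices.

def evaluate_cost(f, F, E, x0, u, t0, tf, n=1000):
    """Euler-integrate the state equation and accumulate the running cost."""
    dt = (tf - t0) / n
    x, t, running = x0, t0, 0.0
    for _ in range(n):
        running += F(x, u(t), t) * dt   # left-endpoint quadrature of F
        x += f(x, u(t), t) * dt         # Euler step of x' = f(x, u, t)
        t += dt
    return E(x0, t0, x, tf) + running

# Toy problem: x' = u, F = u^2 (control energy), E = (x(tf) - 1)^2.
# With the constant control u(t) = 1 on [0, 1], the running cost is 1
# and the endpoint cost vanishes, so J is approximately 1.
J = evaluate_cost(f=lambda x, u, t: u,
                  F=lambda x, u, t: u * u,
                  E=lambda x0, t0, xf, tf: (xf - 1.0) ** 2,
                  x0=0.0, u=lambda t: 1.0, t0=0.0, tf=1.0)
```

A real solver would search over u(·) to minimize J; here the functional is only evaluated for one candidate control.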

Linear quadratic control

A special case of the general nonlinear optimal control problem given in the previous section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated as follows. Minimize the quadratic continuous-time cost functional

J = \tfrac{1}{2} \mathbf{x}^{\mathsf{T}}(t_f) \mathbf{S}_f \mathbf{x}(t_f) + \tfrac{1}{2} \int_{t_0}^{t_f} [\mathbf{x}^{\mathsf{T}}(t) \mathbf{Q}(t) \mathbf{x}(t) + \mathbf{u}^{\mathsf{T}}(t) \mathbf{R}(t) \mathbf{u}(t)] \, \mathrm{d}t

subject to the linear first-order dynamic constraints

\dot{\mathbf{x}}(t) = \mathbf{A}(t) \mathbf{x}(t) + \mathbf{B}(t) \mathbf{u}(t)

and the initial condition

\mathbf{x}(t_0) = \mathbf{x}_0.

A particular form of the LQ problem that arises in many control system problems is that of the linear quadratic regulator (LQR), where all of the matrices (i.e., \mathbf{A}, \mathbf{B}, \mathbf{Q}, and \mathbf{R}) are constant, the initial time is arbitrarily set to zero, and the terminal time is taken in the limit t_f \rightarrow \infty (this last assumption is what is known as an infinite horizon). The LQR problem is stated as follows. Minimize the infinite-horizon quadratic continuous-time cost functional

J = \tfrac{1}{2} \int_{0}^{\infty} [\mathbf{x}^{\mathsf{T}}(t) \mathbf{Q} \mathbf{x}(t) + \mathbf{u}^{\mathsf{T}}(t) \mathbf{R} \mathbf{u}(t)] \, \mathrm{d}t

subject to the linear time-invariant first-order dynamic constraints

\dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t)

and the initial condition

\mathbf{x}(t_0) = \mathbf{x}_0.
In the finite-horizon case the matrices are restricted in that \mathbf{Q} and \mathbf{R} are positive semi-definite and positive definite, respectively. In the infinite-horizon case, the matrices \mathbf{Q} and \mathbf{R} are not only positive semi-definite and positive definite, respectively, but are also constant. These additional restrictions on \mathbf{Q} and \mathbf{R} in the infinite-horizon case are enforced to ensure that the cost functional remains positive; to ensure that it is also bounded, the further restriction is imposed that the pair (\mathbf{A}, \mathbf{B}) be controllable. Note that the LQ or LQR cost functional can be thought of physically as attempting to minimize the control energy (measured as a quadratic form).

The infinite-horizon (LQR) problem may seem overly restrictive and essentially useless because it assumes that the operator is driving the system to the zero state, and hence driving the output of the system to zero. This is indeed correct. However, the problem of driving the output to a desired nonzero level can be solved after the zero-output problem is, and this secondary LQR problem can be solved in a very straightforward manner. It has been shown in classical optimal control theory that the LQ (or LQR) optimal control has the feedback form

\mathbf{u}(t) = -\mathbf{K}(t) \mathbf{x}(t),

where \mathbf{K}(t) is a properly dimensioned matrix given as

\mathbf{K}(t) = \mathbf{R}^{-1} \mathbf{B}^{\mathsf{T}} \mathbf{S}(t),

and \mathbf{S}(t) is the solution of the differential Riccati equation

\dot{\mathbf{S}}(t) = -\mathbf{S}(t) \mathbf{A} - \mathbf{A}^{\mathsf{T}} \mathbf{S}(t) + \mathbf{S}(t) \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^{\mathsf{T}} \mathbf{S}(t) - \mathbf{Q}.

For the finite-horizon LQ problem, the Riccati equation is integrated backward in time using the terminal boundary condition

\mathbf{S}(t_f) = \mathbf{S}_f.

For the infinite-horizon LQR problem, the differential Riccati equation is replaced with the algebraic Riccati equation (ARE)

\mathbf{0} = -\mathbf{S} \mathbf{A} - \mathbf{A}^{\mathsf{T}} \mathbf{S} + \mathbf{S} \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^{\mathsf{T}} \mathbf{S} - \mathbf{Q}.

Since the ARE arises from the infinite-horizon problem, the matrices \mathbf{A}, \mathbf{B}, \mathbf{Q}, and \mathbf{R} are all constant. In general there are multiple solutions to the algebraic Riccati equation, and the positive definite (or positive semi-definite) solution is the one used to compute the feedback gain. The LQ (LQR) problem was elegantly solved by Rudolf E. Kálmán (Kalman, "A New Approach to Linear Filtering and Prediction Problems", Transactions of the ASME, Journal of Basic Engineering 82, 1960, 34–45).
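These relations can be sketched numerically for a scalar plant: integrate the differential Riccati equation backward in time until it settles at the algebraic-Riccati solution, then form the gain K = R⁻¹BᵀS. The plant and weight values (a, b, q, r) below are illustrative assumptions, not from the text:

```python
# Minimal sketch, scalar case: backward integration of the differential
# Riccati equation  S' = -SA - A'S + S B R^-1 B' S - Q  until it converges
# to the ARE solution, then K = R^-1 B' S. The data are toy values.

def lqr_gain_scalar(a, b, q, r, dt=1e-3, tol=1e-12, max_steps=1_000_000):
    """Return (S, K): ARE solution and feedback gain for x' = a x + b u."""
    S = 0.0  # terminal condition; integrate backward toward t -> -infinity
    for _ in range(max_steps):
        # scalar Riccati right-hand side: dS/dt = -2 a S + (b S)^2 / r - q
        dS = -2 * a * S + (b * S) ** 2 / r - q
        S_new = S - dS * dt          # one backward-in-time Euler step
        if abs(S_new - S) < tol:     # settled at the ARE solution
            S = S_new
            break
        S = S_new
    K = b * S / r                    # K = R^-1 B' S
    return S, K

# Toy plant a = 0, b = 1 with q = r = 1: the ARE reduces to S^2 = 1,
# so the positive solution is S = 1 and the gain is K = 1.
S, K = lqr_gain_scalar(a=0.0, b=1.0, q=1.0, r=1.0)
```

In practice one would use a dedicated ARE solver (e.g., in SciPy or MATLAB) rather than this fixed-step integration, but the sketch makes the backward-in-time structure of the Riccati equation explicit.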

Numerical methods for optimal control

Optimal control problems are generally nonlinear and therefore generally do not have analytic solutions (unlike, e.g., the linear-quadratic optimal control problem). As a result, it is necessary to employ numerical methods to solve them. In the early years of optimal control (c. 1950s to 1980s), the favored approach was that of indirect methods. In an indirect method, the calculus of variations is employed to obtain the first-order optimality conditions. These conditions result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value problem. This boundary-value problem has a special structure because it arises from taking the derivative of a Hamiltonian. Thus, the resulting dynamical system is a Hamiltonian system of the form

\dot{\mathbf{x}} = \frac{\partial H}{\partial \boldsymbol{\lambda}}, \qquad \dot{\boldsymbol{\lambda}} = -\frac{\partial H}{\partial \mathbf{x}},

where

H = F + \boldsymbol{\lambda}^{\mathsf{T}} \mathbf{f} - \boldsymbol{\mu}^{\mathsf{T}} \mathbf{h}

is the augmented Hamiltonian. In an indirect method, the boundary-value problem is solved using the appropriate boundary or transversality conditions. The beauty of using an indirect method is that the state and adjoint (i.e., \boldsymbol{\lambda}) are solved for directly, and the resulting solution is readily verified to be an extremal trajectory. The disadvantage of indirect methods is that the boundary-value problem is often extremely difficult to solve (particularly for problems that span large time intervals or problems with interior point constraints). A well-known software program that implements indirect methods is BNDSCO (Oberle and Grimm, "BNDSCO: A Program for the Numerical Solution of Optimal Control Problems", Institute for Flight Systems Dynamics, DLR, Oberpfaffenhofen, 1989).

The approach that has risen to prominence in numerical optimal control since the 1980s is that of so-called direct methods. In a direct method, the state or the control, or both, are approximated using an appropriate function approximation (e.g., polynomial approximation or piecewise-constant parameterization). Simultaneously, the cost functional is approximated as a cost function. The coefficients of the function approximations are then treated as optimization variables, and the problem is "transcribed" to a nonlinear optimization problem of the form: minimize
F(\mathbf{z})

subject to the algebraic constraints

\mathbf{g}(\mathbf{z}) = \mathbf{0}, \qquad \mathbf{h}(\mathbf{z}) \leq \mathbf{0}.

Depending upon the type of direct method employed, the size of the nonlinear optimization problem (NLP) can be quite small (e.g., as in a direct shooting or quasilinearization method), moderate (e.g., in pseudospectral optimal control (Ross and Karpenko, "A Review of Pseudospectral Optimal Control: From Theory to Flight", Annual Reviews in Control 36(2), 2012, 182–197)), or quite large (e.g., in a direct collocation method (Betts, Practical Methods for Optimal Control Using Nonlinear Programming, 2nd ed., SIAM Press, 2010)). In the latter case (i.e., a collocation method), the nonlinear optimization problem may literally have thousands to tens of thousands of variables and constraints. Given the size of many NLPs arising from a direct method, it may appear somewhat counter-intuitive that solving the nonlinear optimization problem is easier than solving the boundary-value problem. Nevertheless, the NLP is indeed easier to solve than the boundary-value problem. The reason for the relative ease of computation, particularly with a direct collocation method, is that the NLP is sparse, and many well-known software programs (e.g., SNOPT (Gill, Murray, and Saunders, User's Manual for SNOPT Version 7: Software for Large-Scale Nonlinear Programming, University of California, San Diego, 2007)) exist to solve large sparse NLPs. As a result, the range of problems that can be solved via direct methods (particularly direct collocation methods, which are very popular these days) is significantly larger than the range of problems that can be solved via indirect methods. In fact, direct methods have become so popular that many elaborate software programs employing them have been written.
Examples include DIRCOL (von Stryk, User's Guide for DIRCOL (version 2.1), Technische Universität Darmstadt, 2000), SOCS (Betts and Huffman, Sparse Optimal Control Software, Boeing, 1997), OTIS (Hargraves and Paris, "Direct Trajectory Optimization Using Nonlinear Programming and Collocation", Journal of Guidance, Control, and Dynamics 10(4), 1987, 338–342), GESOP/ASTOS (Gath and Well, "Trajectory Optimization Using a Combination of Direct Multiple Shooting and Collocation", AIAA 2001-4047, 2001), DITAN (Vasile, Bernelli-Zazzera, Fornasari, and Masarati, ESA/ESOC Study Contract No. 14126/00/D/CS, 2002), and PyGMO/PyKEP (Izzo, "PyGMO and PyKEP: Open Source Tools for Massively Parallel Optimization in Astrodynamics", ICATT, 2012). In recent years, owing to the advent of the MATLAB programming language, optimal control software in MATLAB has become more common. Examples of academically developed MATLAB software tools implementing direct methods include RIOTS (Schwartz, Theory and Implementation of Methods Based on Runge–Kutta Integration for Solving Optimal Control Problems, Ph.D. thesis, University of California at Berkeley, 1996), DIDO (Ross, Enhancements to the DIDO Optimal Control Toolbox, arXiv, 2020), DIRECT (Williams, User's Guide to DIRECT, Version 2.00, 2008), FALCON.m (Rieck, Bittner, Grüter, Diepolder, and Piprek, FALCON.m User Guide, Technical University of Munich, 2019), and GPOPS (Rao, Benson, Huntington, Francolin, Darby, and Patterson, User's Manual for GPOPS, University of Florida, 2008), while an example of an industry-developed MATLAB tool is PROPT (Rutquist and Edvall, PROPT – MATLAB Optimal Control Software, Tomlab Optimization, Inc.). These software tools have significantly increased the opportunity for people to explore complex optimal control problems, both for academic research and for industrial applications (Ross, Computational Optimal Control, 3rd Workshop on Computational Issues in Nonlinear Control, Monterey, CA, 2019). Finally, general-purpose MATLAB optimization environments such as TOMLAB have made coding complex optimal control problems significantly easier than was previously possible in languages such as C and FORTRAN.
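The transcription idea behind direct methods can be sketched in a few lines. The toy problem below (an illustrative assumption, not drawn from the text) uses direct single shooting: a piecewise-constant control is the decision vector z, the dynamics x' = u are Euler-integrated, the endpoint condition x(1) = 1 is handled by a quadratic penalty, and the resulting NLP is minimized with a plain finite-difference gradient descent. Real direct-method software instead hands the transcribed NLP, with its sparsity intact, to a solver such as SNOPT:

```python
# Minimal direct-shooting sketch for: minimize integral of u^2 dt
# subject to x' = u, x(0) = 0, x(1) = 1 (endpoint enforced by penalty).
# The minimum-energy solution is the constant control u(t) = 1.

N, T = 20, 1.0
dt = T / N
W = 1e4  # penalty weight on the endpoint condition x(T) = 1

def transcribed_cost(z):
    """z holds the N piecewise-constant control values (the NLP variables)."""
    x, J = 0.0, 0.0
    for u in z:
        J += u * u * dt                 # running cost F = u^2
        x += u * dt                     # Euler step of x' = u
    return J + W * (x - 1.0) ** 2       # endpoint condition as penalty

def minimize(cost, z, step=1e-3, h=1e-6, iters=2000):
    """Plain finite-difference gradient descent on the transcribed NLP."""
    for _ in range(iters):
        base = cost(z)
        grad = []
        for i in range(len(z)):
            z[i] += h
            grad.append((cost(z) - base) / h)   # forward difference
            z[i] -= h
        z = [zi - step * gi for zi, gi in zip(z, grad)]
    return z

z = minimize(transcribed_cost, [0.0] * N)
# Each entry of z should approach the optimal constant control u = 1.
```

The point of the sketch is the structure, not the solver: once the control is parameterized and the dynamics discretized, the infinite-dimensional optimal control problem has become an ordinary finite-dimensional optimization over z.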

Discrete-time optimal control

The examples thus far have shown continuous-time systems and control solutions. In fact, as optimal control solutions are now often implemented digitally, contemporary control theory is primarily concerned with discrete-time systems and solutions. The theory of consistent approximations (Polak, "On the Use of Consistent Approximations in the Solution of Semi-Infinite Optimization and Optimal Control Problems", Mathematical Programming 62, 1993, 385–415; Ross, "A Roadmap for Optimal Control: The Right Way to Commute", Annals of the New York Academy of Sciences 1065(1), 2005, 210–231) provides conditions under which solutions to a series of increasingly accurate discretized optimal control problems converge to the solution of the original continuous-time problem. Not all discretization methods have this property, even seemingly obvious ones (Fahroo and Ross, "Convergence of the Costates Does Not Imply Convergence of the Control", Journal of Guidance, Control, and Dynamics 31(5), 2008, 1492–1497). For instance, using a variable step-size routine to integrate the problem's dynamic equations may generate a gradient that does not converge to zero (or point in the right direction) as the solution is approached. The direct method RIOTS is based on the theory of consistent approximations.

Examples

A common solution strategy in many optimal control problems is to solve for the costate (sometimes called the shadow price) \lambda(t). The costate summarizes in one number the marginal value of expanding or contracting the state variable in the next period. The marginal value comprises not only the gains accruing in the next period but also those associated with the remaining duration of the program. It is convenient when \lambda(t) can be solved analytically, but usually the most one can do is describe it sufficiently well that the intuition can grasp the character of the solution and an equation solver can compute the values numerically.

Having obtained \lambda(t), the optimal value of the control at time t can usually be solved as a differential equation conditional on knowledge of \lambda(t). Again, it is infrequent, especially in continuous-time problems, that one obtains the value of the control or the state explicitly. Usually, the strategy is to solve for thresholds and regions that characterize the optimal control, and to use a numerical solver to isolate the actual choice values in time.

Finite time

Consider the problem of a mine owner who must decide at what rate to extract ore from their mine. They own rights to the ore from date 0 to date T. At date 0 there is x_0 ore in the ground, and the time-dependent amount of ore x(t) left in the ground declines at the rate u(t) at which the mine owner extracts it. The mine owner extracts ore at cost u(t)^2 / x(t) (the cost of extraction increases with the square of the extraction speed and the inverse of the amount of ore left) and sells ore at a constant price p. Any ore left in the ground at time T cannot be sold and has no value (there is no "scrap value"). The owner chooses the rate of extraction u(t), varying with time, to maximize profits over the period of ownership with no time discounting.

1. Discrete-time version

The manager maximizes profit \Pi:

\Pi = \sum_{t=0}^{T-1} \left[ p u_t - \frac{u_t^2}{x_t} \right]

subject to the law of motion for the state variable x_t:

x_{t+1} - x_t = -u_t.

Form the Hamiltonian and differentiate:

H = p u_t - \frac{u_t^2}{x_t} - \lambda_{t+1} u_t

\frac{\partial H}{\partial u_t} = p - \lambda_{t+1} - 2 \frac{u_t}{x_t} = 0

\lambda_{t+1} - \lambda_t = -\frac{\partial H}{\partial x_t} = -\left( \frac{u_t}{x_t} \right)^2

As the mine owner does not value the ore remaining at time T,

\lambda_T = 0.

Using the above equations, it is easy to solve for the x_t and \lambda_t series:

\lambda_t = \lambda_{t+1} + \frac{\left(p - \lambda_{t+1}\right)^2}{4}

x_{t+1} = x_t \, \frac{2 - p + \lambda_{t+1}}{2}

Using the initial and terminal conditions, the x_t series can be solved explicitly, giving u_t.

2. Continuous-time version

The manager maximizes profit \Pi:

\Pi = \int_0^T \left[ p u(t) - \frac{u(t)^2}{x(t)} \right] \mathrm{d}t,

where the state variable x(t) evolves as follows:

\dot{x}(t) = -u(t).

Form the Hamiltonian and differentiate:

H = p u(t) - \frac{u(t)^2}{x(t)} - \lambda(t) u(t)

\frac{\partial H}{\partial u} = p - \lambda(t) - 2 \frac{u(t)}{x(t)} = 0

\dot{\lambda}(t) = -\frac{\partial H}{\partial x} = -\left( \frac{u(t)}{x(t)} \right)^2

As the mine owner does not value the ore remaining at time T,

\lambda(T) = 0.

Using the above equations, it is easy to obtain the differential equations governing u(t) and \lambda(t):

\dot{\lambda}(t) = -\frac{(p - \lambda(t))^2}{4}

u(t) = x(t) \, \frac{p - \lambda(t)}{2}

Using the initial and terminal conditions, these can be solved to yield

x(t) = \frac{\left(4 - pt + pT\right)^2}{\left(4 + pT\right)^2} \, x_0.
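The discrete-time recursions above can be run directly: sweep the costate backward from its terminal condition, then roll the state and extraction rate forward. A minimal sketch, where the values of p, T, and x_0 are illustrative assumptions:

```python
# Mine-owner example, discrete time. Backward costate sweep from
# lambda_T = 0, then forward state/extraction rollout. Toy data below.

p, T, x0 = 1.0, 5, 10.0

# Backward: lambda_t = lambda_{t+1} + (p - lambda_{t+1})^2 / 4, lambda_T = 0
lam = [0.0] * (T + 1)
for t in range(T - 1, -1, -1):
    lam[t] = lam[t + 1] + (p - lam[t + 1]) ** 2 / 4

# Forward: u_t = x_t (p - lambda_{t+1}) / 2 and x_{t+1} = x_t - u_t
x, u = [x0], []
for t in range(T):
    u.append(x[t] * (p - lam[t + 1]) / 2)
    x.append(x[t] - u[t])

# Realized profit: sum of p u_t - u_t^2 / x_t over the ownership period
profit = sum(p * u[t] - u[t] ** 2 / x[t] for t in range(T))
```

Since the ore remaining at T is worthless, the costate falls to zero at the horizon, so extraction accelerates toward the end of the ownership period; the ore stock x_t declines monotonically and the realized profit is positive.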



Further reading

  • Bertsekas, D. P., Dynamic Programming and Optimal Control, Belmont: Athena, 1995. ISBN 1-886529-11-6.
  • Bryson, A. E. and Ho, Y.-C., Applied Optimal Control: Optimization, Estimation and Control, rev. ed., New York: John Wiley and Sons, 1975. ISBN 0-470-11481-9.
  • Fleming, W. H. and Rishel, R. W., Deterministic and Stochastic Optimal Control, New York: Springer, 1975. ISBN 0-387-90155-8.
  • Kamien, M. I. and Schwartz, N. L., Dynamic Optimization: The Calculus of Variations and Optimal Control in Economics and Management, 2nd ed., New York: Elsevier, 1991. ISBN 0-444-01609-0.
  • Kirk, D. E., Optimal Control Theory: An Introduction, Englewood Cliffs: Prentice-Hall, 1970. ISBN 0-13-638098-0.

